3 Sources
[1]
Tech firms fight to stem deepfake deluge
Tech firms are fighting the scourge of deepfakes, those deceptively realistic voices or videos used by scammers that are more available than ever thanks to artificial intelligence.

Ever-improving generative artificial intelligence (GenAI) tools have become weapons in the hands of bad actors intent on tricking people out of their money or even their identities. Debby Bodkin tells of her 93-year-old mother receiving a telephone call, a cloned voice claiming, "It's me, mom... I've had an accident." When asked where they were, the machine-made impersonator named a hospital. Fortunately, it was a granddaughter who answered the phone, opting to hang up and call Bodkin at work where she was safe and well.

"It's not the first time scammers have called grandma," Bodkin told AFP. "It's daily."

Such deepfake phone scams typically go on to coax victims into paying for medical care or other made-up emergencies. Used on social networks to hijack the notoriety of celebrities or other high-profile figures, sometimes for disinformation, deepfakes are also being exploited by criminal gangs. Hong Kong police earlier this year revealed that a multinational firm employee was tricked into wiring HK$200 million (around US$26 million) to crooks who staged a videoconference with AI avatars of his colleagues.

A recent study by identification start-up iBoom found that a scant tenth of 1% of Americans and Britons were able to correctly tell when a picture or video was a deepfake.

A decade ago, there was a single AI tool for generating synthetic voices -- now there are hundreds of them, according to voice authentication specialist Vijay Balasubramaniyan, CEO of Pindrop Security. GenAI has changed the game, he said. "Before, it took 20 hours (of voice recording) to recreate your voice," the executive told AFP. "Now, it's five seconds."

Firms such as Intel have stepped up with tools to detect GenAI-made audio or video in real-time.
Intel's "FakeCatcher" detects color changes in facial blood vessels to distinguish genuine from bogus imagery. Pindrop breaks down every second of audio and compares it with characteristics of a human voice.

"You have to keep up with the times," says Nicos Vekiarides, chief of the Attestiv platform, which specializes in authenticating digital creations. "In the beginning, we saw people with six fingers on one hand, but progress has made it harder and harder to tell (deepfakes) with the naked eye."

'Global cybersecurity threat'

Balasubramaniyan believes that software for spotting AI content will become standard at companies of all kinds. While GenAI has blurred the boundary between human and machine, companies that re-establish that divide could soar in a market that will be worth billions of dollars, he said.

Vekiarides warned that the issue "is becoming a global cybersecurity threat." "Any company can have its reputation tarnished by a deepfake or be targeted by these sophisticated attacks," Vekiarides said. Balasubramaniyan added that the shift to telework provides more opportunity for bad actors to impersonate their way into companies.

Beyond the corporate world, many expect consumers to look for ways to fight off deepfake scams endangering their personal lives. In January, China-based Honor unveiled a Magic7 smartphone with a built-in deepfake detector powered by AI. British start-up Surf Security late last year launched a web browser that can flag synthetic voice or video, aiming it at businesses.

Siwei Lyu, a professor of computer science at the State University of New York at Buffalo, believes "deepfakes will become like spam," an internet nightmare that people eventually get under control. "Those detection algorithms will be like spam filters in our email software," Lyu predicted.
[2]
Tech firms fight to stem deepfake deluge
Las Vegas (AFP) - Tech firms are fighting the scourge of deepfakes, those deceptively realistic voices or videos used by scammers that are more available than ever thanks to artificial intelligence. Ever-improving generative artificial intelligence (GenAI) tools have become weapons in the hands of bad actors intent on tricking people out of their money or even their identities. Debby Bodkin tells of her 93-year-old mother receiving a telephone call, a cloned voice claiming, "It's me, mom... I've had an accident." When asked where they were, the machine-made impersonator named a hospital. Fortunately, it was a granddaughter who answered the phone, opting to hang up and call Bodkin at work where she was safe and well. "It's not the first time scammers have called grandma," Bodkin told AFP. "It's daily." Such deepfake phone scams typically go on to coax victims into paying for medical care or other made-up emergencies. Used on social networks to hijack the notoriety of celebrities or other high-profile figures, sometimes for disinformation, deepfakes are also being exploited by criminal gangs. Hong Kong police earlier this year revealed that a multinational firm employee was tricked into wiring HK$200 million (around US$26 million) to crooks who staged a videoconference with AI avatars of his colleagues. A recent study by identification start-up iBoom found that a scant tenth of one percent of Americans and Britons were able to correctly tell when a picture or video was a deepfake. A decade ago, there was a single AI tool for generating synthetic voices -- now there are hundreds of them, according to voice authentication specialist Vijay Balasubramaniyan, CEO of Pindrop Security. GenAI has changed the game, he said. "Before, it took 20 hours (of voice recording) to recreate your voice," the executive told AFP. "Now, it's five seconds." Firms such as Intel have stepped up with tools to detect GenAI-made audio or video in real-time. 
Intel "FakeCatcher" detects color changes in facial blood vessels to distinguish genuine from bogus imagery. Pindrop breaks down every second of audio and compares it with characteristics of a human voice. "You have to keep up with the times," says Nicos Vekiarides, chief of Attestiv platform which specializes in authenticating digital creations. "In the beginning, we saw people with six fingers on one hand, but progress has made it harder and harder to tell (deepfakes) with the naked eye." - 'Global cybersecurity threat' - Balasubramaniyan believes that software for spotting AI content will become standard at companies of all kinds. While GenAI has blurred the boundary between human and machine, companies that re-establish that divide could soar in a market that will be worth billions of dollars, he said. Vekiarides warned that the issue "is becoming a global cybersecurity threat." "Any company can have its reputation tarnished by a deepfake or be targeted by these sophisticated attacks," Vekiarides said. Balasubramaniyan added that the shift to telework provides more opportunity for bad actors to impersonate their way into companies. Beyond the corporate world, many expect consumers to look for ways to fight off deepfake scams endangering their personal lives. In January, China-based Honor unveiled a Magic7 smartphone with a built-in deepfake detector powered by AI. British start-up Surf Security late last year launched a web browser that can flag synthetic voice or video, aiming it at businesses. Siwei Lyu, a professor of computer science at the State University of New York at Buffalo, believes "deepfakes will become like spam," an internet nightmare that people eventually get under control. "Those detection algorithms will be like spam filters in our email software," Lyu predicted.
[3]
Tech firms fight to stem deepfake deluge
Tech firms are fighting the scourge of deepfakes, those deceptively realistic voices or videos used by scammers that are more available than ever thanks to artificial intelligence. Ever-improving generative artificial intelligence (GenAI) tools have become weapons in the hands of bad actors intent on tricking people out of their money or even their identities. Debby Bodkin tells of her 93-year-old mother receiving a telephone call, a cloned voice claiming, "It's me, mom... I've had an accident." When asked where they were, the machine-made impersonator named a hospital. Fortunately, it was a granddaughter who answered the phone, opting to hang up and call Bodkin at work where she was safe and well. "It's not the first time scammers have called grandma," Bodkin told AFP. "It's daily." Such deepfake phone scams typically go on to coax victims into paying for medical care or other made-up emergencies. Used on social networks to hijack the notoriety of celebrities or other high-profile figures, sometimes for disinformation, deepfakes are also being exploited by criminal gangs. Hong Kong police earlier this year revealed that a multinational firm employee was tricked into wiring HK$200 million (around US$26 million) to crooks who staged a videoconference with AI avatars of his colleagues. A recent study by identification start-up iBoom found that a scant tenth of one percent of Americans and Britons were able to correctly tell when a picture or video was a deepfake.
A decade ago, there was a single AI tool for generating synthetic voices -- now there are hundreds of them, according to voice authentication specialist Vijay Balasubramaniyan, CEO of Pindrop Security. GenAI has changed the game, he said. "Before, it took 20 hours (of voice recording) to recreate your voice," the executive told AFP. "Now, it's five seconds." Firms such as Intel have stepped up with tools to detect GenAI-made audio or video in real-time. Intel "FakeCatcher" detects color changes in facial blood vessels to distinguish genuine from bogus imagery. Pindrop breaks down every second of audio and compares it with characteristics of a human voice. "You have to keep up with the times," says Nicos Vekiarides, chief of Attestiv platform which specializes in authenticating digital creations. "In the beginning, we saw people with six fingers on one hand, but progress has made it harder and harder to tell (deepfakes) with the naked eye."

'Global cybersecurity threat'

Balasubramaniyan believes that software for spotting AI content will become standard at companies of all kinds. While GenAI has blurred the boundary between human and machine, companies that re-establish that divide could soar in a market that will be worth billions of dollars, he said. Vekiarides warned that the issue "is becoming a global cybersecurity threat." "Any company can have its reputation tarnished by a deepfake or be targeted by these sophisticated attacks," Vekiarides said. Balasubramaniyan added that the shift to telework provides more opportunity for bad actors to impersonate their way into companies. Beyond the corporate world, many expect consumers to look for ways to fight off deepfake scams endangering their personal lives. In January, China-based Honor unveiled a Magic7 smartphone with a built-in deepfake detector powered by AI. British start-up Surf Security late last year launched a web browser that can flag synthetic voice or video, aiming it at businesses.
Siwei Lyu, a professor of computer science at the State University of New York at Buffalo, believes "deepfakes will become like spam," an internet nightmare that people eventually get under control. "Those detection algorithms will be like spam filters in our email software," Lyu predicted. "We're not there yet."
As deepfake technology becomes more sophisticated, tech companies are developing advanced detection tools to combat the growing threat of AI-generated scams and disinformation.
In an era of rapidly advancing artificial intelligence, deepfakes have emerged as a significant cybersecurity concern. These deceptively realistic AI-generated voices and videos are increasingly being weaponized by scammers and criminal organizations, posing threats to individuals and businesses alike [1][2][3].
The scale of the problem is alarming. A recent study by identification start-up iBoom revealed that only 0.1% of Americans and Britons could accurately identify deepfake images or videos [1]. This widespread inability to detect synthetic media underscores the urgency of developing effective countermeasures.
The landscape of voice synthesis has changed dramatically in recent years. Vijay Balasubramaniyan, CEO of Pindrop Security, notes that a decade ago, there was only one AI tool for generating synthetic voices. Today, hundreds exist [1]. The efficiency of these tools has also improved significantly:
"Before, it took 20 hours (of voice recording) to recreate your voice," Balasubramaniyan told AFP. "Now, it's five seconds." [2]
This rapid advancement has made it easier for scammers to create convincing voice clones, leading to an increase in deepfake phone scams targeting vulnerable individuals.
The consequences of deepfake scams can be severe. In a striking example, Hong Kong police reported that an employee of a multinational firm was tricked into transferring HK$200 million (approximately US$26 million) to fraudsters who used AI avatars to impersonate the victim's colleagues in a video conference [1][2][3].
On a more personal level, Debby Bodkin shared an anecdote about her 93-year-old mother receiving a call from a cloned voice claiming to be a relative in an accident. While this particular attempt was thwarted, Bodkin noted that such scam calls targeting her mother occur "daily" [1][2].
In response to the growing deepfake threat, tech firms are developing sophisticated detection and authentication tools:
Intel's "FakeCatcher" detects color changes in facial blood vessels to distinguish between genuine and synthetic imagery [1][2].
Pindrop Security's technology analyzes audio second-by-second, comparing it to characteristics of human voices [1][2].
The Attestiv platform specializes in authenticating digital creations, adapting to the increasing sophistication of deepfakes [1][2].
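Intel has not published FakeCatcher's internals in this article beyond the blood-vessel idea, but the underlying principle, remote photoplethysmography, can be sketched in a few lines: a genuine face's skin color pulses faintly at the heart rate, so the per-frame green-channel average of a face region carries a periodic component in roughly the 0.7-3 Hz band, while many synthetic faces do not. The function below is an illustrative toy under that assumption, not Intel's actual pipeline; the names and thresholds are invented for the demo.

```python
import numpy as np

def pulse_score(green_means, fps=30.0):
    """Fraction of (non-DC) signal energy in the human heart-rate band (0.7-3 Hz).

    green_means: per-frame mean green-channel intensity of a tracked face
    region. High scores suggest a real pulse; low scores suggest the
    periodic blood-flow signal is absent. Toy sketch, not Intel's method.
    """
    x = np.asarray(green_means, dtype=float)
    x = x - x.mean()                              # remove DC offset
    spectrum = np.abs(np.fft.rfft(x)) ** 2        # power spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)        # plausible heart rates
    total = spectrum[1:].sum()                    # ignore the DC bin
    return spectrum[band].sum() / total if total > 0 else 0.0

# Synthetic demo: a "real" trace with a 1.2 Hz (72 bpm) pulse vs. pure noise.
rng = np.random.default_rng(0)
t = np.arange(300) / 30.0                         # 10 s of 30 fps video
real = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(300)
fake = 0.1 * rng.standard_normal(300)
```

A real detector would add face tracking, motion compensation, and a learned classifier on top; the point here is only that the physiological signal FakeCatcher reportedly exploits is a measurable spectral feature.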
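Pindrop's models are proprietary, but "breaks down every second of audio and compares it with characteristics of a human voice" can be illustrated with a minimal stand-in: score each one-second window with a simple spectral statistic and compare it against what human speech typically produces. Spectral flatness is used here purely as an example feature; the function name and thresholds are assumptions for the sketch, not Pindrop's actual analysis.

```python
import numpy as np

def per_second_flatness(audio, sr=8000):
    """Spectral flatness of each full one-second window of audio.

    Flatness = geometric mean / arithmetic mean of the magnitude spectrum.
    Tonal signals such as voiced speech score near 0; broadband noise
    scores near 1. A toy per-second check, not Pindrop's real feature set.
    """
    scores = []
    for start in range(0, len(audio) - sr + 1, sr):
        window = audio[start:start + sr]
        mag = np.abs(np.fft.rfft(window)) + 1e-12   # avoid log(0)
        geometric = np.exp(np.mean(np.log(mag)))
        arithmetic = np.mean(mag)
        scores.append(geometric / arithmetic)
    return scores

# Synthetic demo: 3 s of a "voiced" harmonic tone vs. 3 s of white noise.
rng = np.random.default_rng(1)
t = np.arange(3 * 8000) / 8000.0
voiced = np.sin(2 * np.pi * 180 * t) + 0.3 * np.sin(2 * np.pi * 360 * t)
noise = rng.standard_normal(3 * 8000)
```

Production systems use far richer per-window features and trained models; the sketch only shows the shape of the approach, scoring audio window by window against expected human-voice characteristics.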
Experts predict that deepfake detection software will become standard across industries. Balasubramaniyan believes that companies capable of distinguishing between human and machine-generated content could thrive in a market potentially worth billions [1][2].
Consumer-oriented solutions are also emerging. China-based Honor has introduced a Magic7 smartphone with a built-in AI-powered deepfake detector [1][2][3]. Meanwhile, British start-up Surf Security has launched a web browser capable of flagging synthetic voice or video, primarily targeting businesses [1][2][3].
The proliferation of deepfakes presents a global cybersecurity threat, with potential impacts on corporate reputations and security. The shift to remote work has further increased vulnerabilities, providing more opportunities for bad actors to impersonate their way into companies [1][2].
Siwei Lyu, a professor of computer science at the State University of New York at Buffalo, draws a parallel between deepfakes and spam, suggesting that detection algorithms may eventually become as commonplace as email spam filters [1][2][3]. However, he acknowledges that "We're not there yet" [3], indicating that the battle against deepfakes is far from over.