2 Sources
[1]
Deepfakes of Elon Musk contribute to billions in fraud losses in the U.S.
EULESS - She first saw the ad on Facebook. And then again on TikTok. After seeing what appeared to be Elon Musk offering an investment opportunity over and over again, Heidi Swan figured it had to be true. "Looked just like Elon Musk, sounded just like Elon Musk and I thought it was him," said Swan.

She contacted the company behind the pitch and opened an account for more than $10,000. The 62-year-old healthcare worker thought she was making a smart investment in cryptocurrency from a businessman and investor worth billions of dollars. But Swan would soon learn she'd been scammed by a new wave of high-tech thieves using artificial intelligence to create deepfakes. Even looking back at the videos now, knowing they're fakes, Swan still thinks they look convincing. "They still look like Elon Musk," she said. "They still sound like Elon Musk."

As artificial intelligence technology evolves and becomes more accessible, these kinds of scams are becoming more common. According to Deloitte, a leading financial research group, AI-generated content contributed to more than $12 billion in fraud losses last year and could reach $40 billion in the U.S. by 2027. Both the Federal Trade Commission and the Better Business Bureau have issued warnings that deepfake scams are on the rise.

A study by AI firm Sensity found that Elon Musk is the most common celebrity used in deepfake scams. One likely reason is his wealth and entrepreneurship. Another is the number of interviews he has done; the more content there is of someone online, the easier it is to create a convincing deepfake.

At the University of North Texas in Denton, Professor Christopher Meerdo is also using artificial intelligence, but to create art. "It's not going to replace the creative arts," Meerdo said. "It's going to just augment them and change the way that we understand things that we could do in the sphere of creativity."

Even though Meerdo sees artificial intelligence as a way to be innovative, he also sees its dangers. Meerdo showed the CBS News Texas I-Team how scammers can take a real video and use AI tools to replace a person's voice and mouth movements, making the subject appear to say something completely different.

Advances in technology are making it easier to create deepfake videos. All someone familiar with AI needs to make one is a single still image and a video recording. To demonstrate this, Meerdo took a video of investigative reporter Brian New and created a deepfake of Elon Musk. These AI-generated videos are hardly perfect, but they only need to be convincing enough to deceive an unsuspecting victim. "If you are really trying to scam people, I think you can do some really bad things with this," Meerdo said.

Some deepfakes are easier to spot than others; there can be signs like unnatural lip movements or odd body language. But as the technology improves, it will get harder to tell just by looking.

A growing number of websites claim they can detect deepfakes. Using three known deepfake videos and three authentic ones, the I-Team put five of these websites to an unscientific test: Deepware, Attestiv, DeepFake-O-Meter, Sensity and Deepfake Detector. In total, the five online tools correctly identified the tested videos nearly 75% of the time. The I-Team reached out to the companies with the results; their responses are below.

Deepware

Deepware, a website that's free to use, initially failed to flag two of the fake videos the I-Team tested. In an email, the company said the clips used were too short and that for the best results, uploaded videos should be between 30 seconds and one minute long. Deepware correctly identified all of the longer videos. According to the company, its 70% detection rate is considered good for the industry. The frequently asked questions section of Deepware's website states: "Deepfakes are not a solved problem yet. Our results indicate the likelihood of a specific video being a deepfake or not."

Deepfake Detector

Deepfake Detector, a tool that charges $16.80 per month, identified one of the fake videos as "97% natural voice." The company, which specializes in spotting AI-generated voices, said in an email that factors like background noise or music can affect results, but that it has an accuracy rate of approximately 92%. Asked what guidance it offers average consumers, the company wrote: "Our tool is designed to be user-friendly. Average consumers can easily upload an audio file on our website or use our browser extension to analyze content directly. The tool will provide an analysis to help determine if a video may contain deepfake elements using probabilities, making it accessible even for those unfamiliar with AI technology."

Attestiv

Attestiv flagged two of the real videos as "suspicious." According to the company's CEO, Nicos Vekiarides, false positives can be triggered by factors like graphics and edits; both authentic videos flagged as "suspicious" included graphics and edits. The site offers a free service, but it also has a paid tier where consumers can adjust settings and calibrations for more in-depth analysis. While he acknowledges that Attestiv isn't perfect, Vekiarides said that as deepfakes become harder to spot with the naked eye, these kinds of websites are needed as part of the solution. "Our tool can determine if something is suspicious, and then you can verify it with your own eyes to say, 'I do think that's suspicious,'" Vekiarides said.

DeepFake-O-Meter

DeepFake-O-Meter, another free tool, is supported by the University at Buffalo and the National Science Foundation. It identified two of the real videos as having a high probability of being AI-generated. In an email, the creator of the open platform said one limitation of deepfake detection models is that video compression can lead to audio-video sync issues and inconsistent mouth movements. Asked how everyday users should interpret the results, the company emailed: "Currently, the main result shown to users is the probability value of this sample being a generated sample across different detection models. This can be used as a reference if multiple models agree on the same answer with confidence (e.g., over 80% for AI-generated or below 20% for real video). We are currently developing a more understandable way of showing the results, as well as new models that can output comprehensive detection results."

Sensity

Sensity's deepfake detector correctly identified all six clips, showing a heatmap indicating where AI manipulation is most likely. The company offers a free trial period for its service and told the I-Team that while it's currently tailored for private and public organizations, its future goal is to make the technology accessible to everyone.
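DeepFake-O-Meter's advice (trust a verdict only when multiple detection models agree with confidence, e.g. all over 80% for AI-generated or below 20% for real) amounts to a simple consensus rule. The sketch below is an illustrative reading of that quoted guidance, not the platform's actual code; the function name and thresholds are assumptions drawn from the email.

```python
def consensus_verdict(model_probs, high=0.8, low=0.2):
    """Aggregate per-model probabilities that a clip is AI-generated.

    Per the guidance quoted above: report a confident verdict only when
    every model agrees (all >= high -> fake, all <= low -> real);
    otherwise the result is inconclusive and needs human review.
    """
    if all(p >= high for p in model_probs):
        return "likely AI-generated"
    if all(p <= low for p in model_probs):
        return "likely real"
    return "inconclusive"

# Example: three detection models score the same clip.
print(consensus_verdict([0.91, 0.88, 0.95]))  # likely AI-generated
print(consensus_verdict([0.05, 0.12, 0.40]))  # inconclusive: one model is unsure
```

This is also why the I-Team's mixed results matter: a single tool's probability is only a reference point, and disagreement between tools is itself a signal to look closer.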
[2]
Deepfakes of Elon Musk are contributing to billions of dollars in fraud losses in the U.S.
AI-generated deepfakes, particularly those impersonating Elon Musk, are contributing to a surge in fraud, with U.S. losses projected to reach $40 billion by 2027. As detection tools struggle to keep pace, experts warn of the growing threat to unsuspecting victims.
Artificial intelligence (AI) technology is being exploited by scammers to create increasingly convincing deepfakes, leading to a surge in fraud cases. One recent victim, Heidi Swan, a 62-year-old healthcare worker, lost over $10,000 to a scam featuring a deepfake of Elon Musk promoting a cryptocurrency investment opportunity [1][2].
According to Deloitte, a leading financial research group, AI-generated content contributed to more than $12 billion in fraud losses last year. This figure is projected to reach a staggering $40 billion in the U.S. by 2027 [1][2]. Both the Federal Trade Commission and the Better Business Bureau have issued warnings about the rising threat of deepfake scams [1][2].
A study by AI firm Sensity found that Elon Musk is the most frequently impersonated celebrity in deepfake scams. His wealth, entrepreneurial status, and the abundance of his online content make him an ideal target for scammers [1][2]. The more content available online, the easier it becomes to create convincing deepfakes.
Professor Christopher Meerdo from the University of North Texas demonstrated how scammers can manipulate real videos using AI tools to replace a person's voice and mouth movements. Creating a deepfake has become increasingly accessible, requiring only a single still image and a video recording [1][2].
As the technology improves, identifying deepfakes by eye is becoming more challenging. While some exhibit unnatural lip movements or odd body language, others are highly convincing. In response, several websites have emerged claiming to detect deepfakes [1][2].
The CBS News Texas I-Team conducted an unscientific test of five deepfake detection websites: Deepware, Attestiv, DeepFake-O-Meter, Sensity, and Deepfake Detector. Using three known deepfake videos and three authentic ones, the tools correctly identified the videos nearly 75% of the time [1][2].
Deepware: Achieved a 70% detection rate, considered good for the industry, but struggled with shorter video clips [1][2].
Deepfake Detector: Claims an accuracy rate of approximately 92%, though results can be affected by background noise or music [1][2].
Attestiv: Produced false positives triggered by graphics and edits in authentic videos [1][2].
DeepFake-O-Meter: Misidentified two real videos as having a high probability of being AI-generated [1][2].
Sensity: Correctly identified all six clips, displaying a heatmap of where AI manipulation is most likely [1][2].
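The aggregate figure above can be sanity-checked with simple arithmetic: five tools each judged six clips, for 30 individual verdicts, so "nearly 75%" implies roughly 22 to 23 correct calls. A minimal sketch (the exact per-tool breakdown is not reported in the article):

```python
tools = 5   # Deepware, Attestiv, DeepFake-O-Meter, Sensity, Deepfake Detector
clips = 6   # three known deepfakes plus three authentic videos
total_verdicts = tools * clips  # 30 individual judgments

# "Nearly 75%" of 30 verdicts corresponds to roughly 22-23 correct calls.
for correct in (22, 23):
    print(f"{correct}/{total_verdicts} correct = {correct / total_verdicts:.1%}")
```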
As deepfake technology continues to evolve, the battle between creators and detectors intensifies. While detection tools are not perfect, experts like Nicos Vekiarides, CEO of Attestiv, argue that they are a necessary part of the solution. As deepfakes become harder to spot with the naked eye, these tools can serve as a first line of defense, prompting users to scrutinize suspicious content more closely [1][2].
The Model Context Protocol (MCP) is emerging as a game-changing framework for AI integration, offering a standardized approach to connect AI agents with external tools and services. This innovation promises to streamline development processes and enhance AI capabilities across various industries.
2 Sources
Technology
7 hrs ago
A new study reveals that advanced AI language models, including ChatGPT and Llama, are increasingly prone to oversimplifying complex scientific findings, potentially leading to misinterpretation and misinformation in critical fields like healthcare and scientific research.
2 Sources
Science and Research
7 hrs ago
The US government is planning new export rules to limit the sale of advanced AI GPUs to Malaysia and Thailand, aiming to prevent their re-export to China and close potential trade loopholes.
3 Sources
Policy and Regulation
23 hrs ago
An Xbox executive's suggestion to use AI chatbots for emotional support after layoffs backfires, highlighting tensions between AI adoption and job security in the tech industry.
7 Sources
Technology
1 day ago
An Indian software engineer, Soham Parekh, has been accused of simultaneously working for multiple Silicon Valley startups, sparking a debate on remote work ethics and hiring practices in the tech industry.
8 Sources
Startups
1 day ago