Curated by THEOUTPOST
On Wed, 21 Aug, 8:02 AM UTC
4 Sources
[1]
McAfee's Deepfake Detector keeps it real for Lenovo AI PCs
New AI tool will spot the deepfakes before they make a fool out of you

Deepfake videos are an impressive demonstration of how AI can mimic real people, but the technology is too often leveraged to trick people into thinking they are seeing and hearing a real person, especially a celebrity. It's a cybersecurity issue, which is why McAfee has partnered with Lenovo to bring its Deepfake Detector exclusively to Lenovo AI PCs.

As the name suggests, McAfee's Deepfake Detector spots and flags deepfake videos, the kind that have scammed people out of their money, more than half a million dollars in some extreme cases. McAfee has been developing ways of limiting deepfake-fueled scams for a few years, but the Deepfake Detector takes that effort to a new level. The tool was trained on around 200,000 video samples to teach it how to accurately identify audio generated or altered with AI. It runs in the background like most antivirus software, scanning video content playing both online and locally. Should it detect a deepfake, it alerts the user and lets them decide what to do about it.

"Knowledge is power, and this has never been truer than in the AI-driven world we're living in today," McAfee Senior Vice President of Product Roma Majumder said. "No more wondering, is this Warren Buffett investment scheme legitimate, does Taylor Swift really want to give away cookware to fans, or did a politician actually say these words? The answers are provided to you automatically and within seconds with McAfee Deepfake Detector."

The Deepfake Detector is specifically designed to operate on select Lenovo AI PCs because they are built with a neural processing unit (NPU) that enhances on-device AI capabilities. The computer can monitor and tag a video as a deepfake without needing to upload data to the cloud. The whole analysis stays on the device, which is a boon for the more privacy-minded PC user.

The Deepfake Detector is now available on select Lenovo AI PCs in the U.S., UK, and Australia. A new purchase of a Lenovo AI PC comes with a free 30-day trial, with subscriptions starting at $10 a year afterward.

Those interested in buying a Lenovo AI PC with the Deepfake Detector may have to check who built the NPU, however. Earlier this year, McAfee announced that the Deepfake Detector would be exclusive to certain Intel chips with an NPU, but it's not clear if the Lenovo exclusivity deal removes that limit. Lenovo's AI PC portfolio includes computers with NPUs built by chipmakers besides Intel, such as the Qualcomm Snapdragon X Elite. We've reached out to McAfee and Lenovo to find out and will update you when we learn more.

"At McAfee, we're inspired by the transformative potential of AI and are committed to helping shape a future where AI is used for good. Teaming up with Lenovo boosts our ability to deliver the most effective, automated, AI-powered deepfake detection, offering people a powerful digital guardian on their PCs," Majumder said.

Even if you don't have the right kind of Lenovo, you can still check if a video is a deepfake with the new McAfee Smart AI Hub at McAfee.ai. The website is designed to educate consumers about AI-driven scams, including deepfakes. But it's not just reading material. Visitors can submit videos for analysis to find out whether they're deepfake scams, and McAfee will use the submissions to improve the educational content further and adapt its defensive software.
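At a high level, the workflow described here resembles a background loop: sample whatever audio is currently playing, score it with a local model, and surface a notification when the score crosses a threshold. The sketch below is purely illustrative and assumes hypothetical capture_audio_chunk and score_deepfake helpers; it is not McAfee's implementation, just one way to picture how an always-on, on-device detector could be structured.

```python
# Minimal sketch of an on-device deepfake-audio alert loop (hypothetical;
# not McAfee's code). capture_audio_chunk and score_deepfake are stand-ins
# for a real audio tap and a local NPU-backed classifier.
import random
import time

ALERT_THRESHOLD = 0.8   # only flag clips the model is confident about
CHUNK_SECONDS = 3       # analyze audio in short windows

def capture_audio_chunk(seconds: int) -> list[float]:
    """Placeholder: pretend to grab `seconds` of 16 kHz audio from the active video."""
    return [random.uniform(-1.0, 1.0) for _ in range(16_000 * seconds)]

def score_deepfake(samples: list[float]) -> float:
    """Placeholder: a real detector would run an on-device model here."""
    return random.random()

def monitor(iterations: int = 5) -> None:
    """Poll the playing audio and notify the user when a chunk looks AI-generated."""
    for _ in range(iterations):
        score = score_deepfake(capture_audio_chunk(CHUNK_SECONDS))
        if score >= ALERT_THRESHOLD:
            # The product surfaces a pop-up and lets the user decide; here we just print.
            print(f"Possible AI-generated audio detected (score={score:.2f})")
        time.sleep(CHUNK_SECONDS)

if __name__ == "__main__":
    monitor()
```

Keeping every step of that loop on the device is what lets the analysis happen without uploading audio to the cloud, as the article notes.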
[2]
McAfee Rolls Out Deepfake Detector in Lenovo's New Copilot-Plus PCs
Deepfakes have come a long way in just the past year, making it increasingly difficult for the average person to figure out what's real and what's not. Thanks to the power of artificial intelligence, cybercriminals and others can now produce surprisingly convincing audio, video and still-photo deepfakes faster and easier than ever before. And experts say those creations could be used to do everything from scam consumers out of their money to swing public opinion ahead of an election.

While much of that problem stems from the rise of AI, security software company McAfee says the solution could lie in AI, too. The McAfee Deepfake Detector, announced Wednesday, will alert consumers to possible deepfakes they might encounter while browsing the internet or viewing posts in their social media feeds. The tool will be rolled out on Lenovo's Copilot-Plus PCs starting today. Just like the antivirus software McAfee is known for, it will be a paid service. Consumers who opt into using it will get a free 30-day trial. After that, plans start at $9.99 for the first year.

While the detector will be powered by AI, all processing will be done on device, both to ensure privacy and prevent latency issues that can occur when data is sent to the cloud, says Steve Grobman, McAfee's executive vice president and chief technology officer. Consumer data will not be collected, and updates to the tool's AI will come from the company's research team, he says.

"We really wanted to arm our customers with a set of tools to help them identify whether something is potentially AI generated," Grobman says. "But we also wanted to be very mindful of things like privacy and user experience."

Also, like antivirus, the detector is designed to run quietly in the background. But if it detects something questionable while the user is browsing websites or watching videos on their social media feeds, there will be a pop-up notification. Users can then choose whether to ignore it or click on it for more information.

Right now, the tool only analyzes a video's audio, so the detector won't kick in if someone is just scrolling through their Instagram or Twitter feeds without the sound on. And it can't tell if photos are deepfakes. The tool's capabilities will eventually be expanded to do those things, Grobman says. But McAfee decided to start with audio, because while many deepfakes use real videos, they almost always have faked audio.

And while the detector is only available on the new Lenovo computers for now, Grobman says it's compatible with certain kinds of both Intel and Qualcomm processors, which could allow for future availability on other kinds of PCs and mobile devices.

The idea is to educate people about what they're seeing. As part of that effort, McAfee is also launching its Smart AI Hub, a website where consumers can go to learn all about AI and deepfake technology.

While you might think getting consumers to pay $10 a year for something like this would be a tough sell, Grobman argues that it's not a whole lot different than paying for software to keep your computer free of viruses. He notes that deepfakes are being increasingly used in online scams, pointing to a now-famous deepfake video that made it look like megastar Taylor Swift was endorsing a $10 deal on Le Creuset skillets.
In that kind of scam, there's no malware for security software to detect, and there's nothing to stop those who fall for it from sending off their money, cryptocurrency or personal information. "But if we can help warn a user that, 'Hey, this is likely AI generated,' a light bulb might go on for some of them," he says. "Maybe they'll save their 10 bucks."
[3]
McAfee unleashes AI deepfake audio detector - but how reliable can it be?
Altered audio can signal a scam, and Deepfake Detector promises to find it. Here are the PCs it works on and what it will cost you.

Artificial intelligence (AI)-generated deepfakes are all over the internet, fueling everything from scams to political misinformation. Cybersecurity provider McAfee is now the latest entity to attempt a solution -- but it's not for everyone, and it may not be a catch-all.

On Wednesday, McAfee launched its Deepfake Detector, an AI-powered tool that scans videos for signs of AI-generated audio. According to the release, once a user opts in, the detector automatically monitors what's playing on a PC and alerts the user "within seconds" if it finds AI-altered audio -- all without the use of third-party software or the need for user-initiated manual checks.

Altered audio could indicate a deepfake scam, including those that clone the voices of loved ones to trick people into paying fake ransoms or impersonate public figures to spread false information. McAfee's release notes that the proliferation of AI has enabled cybercriminals to create "more convincing, personalized, AI-generated scams at scale."

"While not all AI content is created with malicious intent, the ability to know if a video is real or fake helps consumers make smart and well-informed decisions," McAfee's announcement states.

"Deepfake Detector only analyzes audio; more specifically, it scans audio on videos playing on the browser and does not scan things that involve digital rights management, like the content you'd find on a video streaming service like Netflix or Disney+," a McAfee spokesperson told ZDNET. The detector does not monitor any video or audio playing outside of the PC.

"Our threat research found that most deepfake videos used AI-manipulated or AI-generated audio as the primary way to mislead viewers. The visual elements of the videos are typically tightly cut, genuine footage to distract the viewers from the suspicious message," the spokesperson continued.

McAfee created Deepfake Detector in partnership with Lenovo specifically for its Copilot+ PCs, meaning it is only available on those devices, and it will eventually cost at least $10 per year. Using the PC's neural processing unit (NPU), the company's AI detection models run inference, or the end-to-end content identification process, entirely on the device. This kind of on-device AI reinforces user privacy by keeping personal data out of the cloud, an approach Apple is also taking with Apple Intelligence, rolling out to devices this fall.

Of course, the detector must surveil a user's laptop to work, but McAfee assures this won't infringe on privacy. "McAfee does not collect or record a user's audio in any way, and the user is always in control and can turn audio detection on or off as desired," the company explained in the release. McAfee also explained that the benefits of running Deepfake Detector on-device include faster processing speed and improved battery life.

But does AI audio detection really work? "No more wondering, is this an actual celebrity giveaway, is this investment offer legitimate, did this politician actually say these words?" said Roma Majumder, McAfee's senior vice president of product, in the release.
That's a high expectation to set. Putting this kind of technology -- however early-stage -- directly onto devices better prepares consumers for a future full of AI-generated content. However, as ZDNET's own reporting has found, the accuracy and efficacy of detection technology have been unreliable, especially considering the pace at which generative AI is evolving.

"The technology uses a model trained across nearly 200,000 samples and gives consumers advanced AI detection, with a 96% accuracy rate," the McAfee spokesperson said of the detector.

Alongside Deepfake Detector, McAfee also released the Smart AI Hub, an educational cybersecurity resource for consumers aiming to "build awareness of deepfakes and AI-driven scams," according to the announcement. Users can submit suspicious-looking videos to the hub for McAfee to analyze, and the company says it plans to share the resulting insights with the public.

The Deepfake Detector is available today in English for all new Lenovo Copilot+ PCs ordered from Lenovo's website and select retailers in the US. Once they've purchased the PC, customers can use the detector for free for 30 days, after which pricing begins at $10 for the first year. The feature will be available in Australia and the UK later in 2024.
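The accuracy figure on its own doesn't settle the reliability question the headline raises: how much to trust an individual alert also depends on how common deepfakes are in what a user actually watches. The back-of-the-envelope calculation below is a generic Bayes' rule illustration under the assumption (not stated by McAfee) that the 96% figure applies equally to real and fake clips; the numbers are hypothetical, not measurements of the product.

```python
# Illustrative only: how base rates affect the trustworthiness of an alert
# from a detector assumed to be right 96% of the time on both real and fake audio.

def alert_precision(prevalence: float, sensitivity: float = 0.96,
                    specificity: float = 0.96) -> float:
    """Probability a flagged clip really is a deepfake (Bayes' rule)."""
    true_alerts = sensitivity * prevalence
    false_alerts = (1 - specificity) * (1 - prevalence)
    return true_alerts / (true_alerts + false_alerts)

for prevalence in (0.50, 0.10, 0.01):
    print(f"deepfake prevalence {prevalence:.0%}: "
          f"an alert is correct about {alert_precision(prevalence):.0%} of the time")
```

If only 1% of the clips a user encounters were deepfakes, a detector with those assumed error rates would be right only about one time in five when it raises an alert, which is part of why the caution about detector reliability is worth keeping in mind.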
[4]
New McAfee tool can detect AI-generated audio
Why it matters: AI companies have released a growing array of tools to generate content, but products that can tell you with any reliability whether content was made with AI remain rare.

How it works: The McAfee Deepfake Detector, as the new software is known, focuses on detecting AI-generated audio within almost any audio or video stream available on a PC.

What they're saying: "The barrier to create AI generated content has come way down and consumers don't really have great tools to know whether what they're looking at is potentially generated with AI," McAfee chief technology officer Steve Grobman told Axios.

Yes, but: Deepfake Detector does its work not in the cloud but on a user's computer.

Between the lines: While the work Deepfake Detector performs may not be the most important to run locally versus in the cloud, Grobman said McAfee is also looking ahead toward other kinds of AI scans that might be more sensitive.

What's next: The McAfee tool will be available exclusively on Lenovo Copilot+ PCs through mid-September. McAfee says it expects to make the tool available on other devices, has been in talks with other PC makers, and also has a collaboration with Intel.
McAfee introduces its Deepfake Detector, a deepfake detection tool built on the company's Project Mockingbird technology, for Lenovo's new AI PCs. The technology aims to combat the rising threat of AI-generated audio and video content.
In a significant move to combat the growing threat of AI-generated deceptive content, cybersecurity giant McAfee has unveiled its Deepfake Detector, a tool built on the deepfake detection technology it previewed earlier this year as Project Mockingbird. The detector is set to be integrated into Lenovo's new line of AI-powered PCs, marking a crucial step in the fight against digital misinformation [1].
The Deepfake Detector uses AI models to analyze audio content and determine whether it has been artificially generated or manipulated. The tool is designed to detect subtle inconsistencies and patterns that are typically present in AI-generated audio, which human ears might miss [2].
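For a sense of how this kind of audio analysis is typically structured (a generic sketch, not a description of McAfee's actual models), a detector can split a clip's soundtrack into short frames, extract features from each frame, score the frames with a trained classifier, and aggregate the scores into a clip-level verdict. Everything inside the functions below is placeholder logic; in a real system the features and classifier would be learned from labeled audio.

```python
# Conceptual frame-level scoring pipeline (hypothetical placeholder logic,
# not McAfee's detection models).
from statistics import mean

FRAME_SIZE = 16_000  # one second of 16 kHz audio per frame

def frames(samples: list[float], size: int = FRAME_SIZE):
    """Yield consecutive fixed-size frames from a clip's audio samples."""
    for start in range(0, len(samples) - size + 1, size):
        yield samples[start:start + size]

def extract_features(frame: list[float]) -> list[float]:
    """Placeholder features; real systems use spectral representations."""
    energy = sum(s * s for s in frame) / len(frame)
    zero_crossing_rate = sum(
        1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0)
    ) / len(frame)
    return [energy, zero_crossing_rate]

def frame_score(features: list[float]) -> float:
    """Placeholder classifier; a trained model would produce this score."""
    energy, zcr = features
    return min(1.0, 0.5 * energy + zcr)

def clip_looks_fake(samples: list[float], threshold: float = 0.6) -> bool:
    """Aggregate per-frame scores into a single clip-level verdict."""
    scores = [frame_score(extract_features(f)) for f in frames(samples)]
    return bool(scores) and mean(scores) >= threshold
```

A production detector would replace the placeholder features and classifier with a model trained on labeled audio, which is where the roughly 200,000 training samples McAfee cites come in.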
Lenovo's new Copilot+ PCs are the first to feature McAfee's deepfake detection technology. These AI-enhanced computers are expected to provide users with an additional layer of security against sophisticated digital threats. The integration aims to empower users to verify the authenticity of audio content they encounter online [1].
The launch of the Deepfake Detector comes at a critical time when deepfake technology is becoming increasingly sophisticated and accessible. McAfee reports that 84% of IT professionals believe deepfakes pose a significant threat to their organizations. Moreover, 61% of consumers express concern about their inability to distinguish between real and AI-generated content [3].
While the Deepfake Detector represents a significant advancement in deepfake detection, experts caution that no technology is infallible. The rapidly evolving nature of AI-generated content means that detection tools must continually adapt to new techniques. McAfee acknowledges these challenges and emphasizes the importance of ongoing research and development in this field [4].
The introduction of the Deepfake Detector signals a shift in the cybersecurity landscape, where AI is being used to combat AI-generated threats. This development raises important questions about the future of digital trust and the role of technology in verifying online content. As deepfake technology continues to advance, the race between creation and detection tools is likely to intensify, shaping the future of digital security and information integrity [3].
Hiya, a call screening and fraud detection company, has released a free Chrome extension called Hiya Deepfake Voice Detector to identify AI-generated voices in audio and video content, aiming to combat misinformation ahead of the 2024 US elections.
4 Sources
Deepfake technology is increasingly being used to target businesses and threaten democratic processes. This story explores the growing prevalence of deepfake scams in the corporate world and their potential impact on upcoming elections.
2 Sources
Trend Micro introduces a comprehensive cybersecurity solution to protect consumers and enterprises against AI-generated threats, including deepfakes. The new suite combines advanced technologies to detect and prevent sophisticated cyberattacks.
4 Sources
YouTube is creating new tools to identify AI-generated content, including deepfake voices and faces. This move aims to protect creators and maintain trust on the platform amid growing concerns about AI-generated misinformation.
4 Sources
Bitdefender introduces Scam Copilot, an AI-driven tool designed to protect users against increasingly sophisticated online scams and fraud, particularly those powered by large language models.
2 Sources