Curated by THEOUTPOST
On Wed, 16 Apr, 4:05 PM UTC
2 Sources
[1]
AI unleashes more advanced scams. Here's what to look out for (and how to stay protected)
Microsoft's Cyber Signals report identifies AI-driven deception in the workplace.

Artificial intelligence (AI) makes creating new material, such as text or images, as easy as typing a simple text prompt. While that capability means big productivity gains for individuals, bad actors can exploit AI to create elaborate cyber scams.

Also: The best VPN services (and how to choose the right one for you)

Evidence suggests cyberattacks are on the rise. Between March 2024 and March 2025, Microsoft stopped approximately $4 billion in fraud attempts. Many of those attacks were AI-enhanced.

"We've seen it where a bunch of people are using AI really well to improve their lives, which is what we want, but in the hands of bad actors, they're using AI to supercharge their scams," Kelly Bissell, CVP of Fraud and Abuse at Microsoft, told ZDNET.

Also: Navigating AI-powered cyber threats in 2025: 4 expert security tips for businesses

On Wednesday, Microsoft published its Cyber Signals report, titled 'AI-Driven Deception: Emerging Fraud Threats and Countermeasures,' to help people identify common attacks and learn what preventative measures they can take. You can find a roundup of the attacks identified in the report, along with tips to stay safe online, below.

If you have encountered any AI-generated content, whether an image or text, you have likely seen how realistic it can be. Bad actors can use this capability to create fraudulent websites that are visually indistinguishable from real ones, complete with AI-generated product descriptions, images, and even reviews. Since doing so requires no prior technical knowledge and only a small amount of time, consumers are more likely to come across these scams than in the past.

There are ways to stay protected, including using a browser with mitigations built in. For example, Microsoft Edge has website typo protection and domain impersonation protection, which use deep learning to warn users about fake websites. Edge also has a Scareware Blocker, which blocks scam pages and popup screens.

Microsoft also identifies proactive measures users can take, such as avoiding impulse buying, since fraudulent sites often simulate a false sense of urgency with countdown timers and similar tactics, and avoiding payment mechanisms that lack fraud protections, such as direct bank transfers or cryptocurrency. Another tip is to be cautious about clicking on ads without verifying them first.

"AI for bad can actually target 'Sabrina' and what you do because of all your public information that you work on, customize an ad for you, and they can set up a website and pay for an ad within the search engine pretty easily for Sabrina or lots of Sabrinas," Bissell said as an example.

Bad actors can also create fake job listings in seconds using AI. To make these ads even more convincing, the actors list them on various reliable job platforms using stolen credentials, auto-generated descriptions, and even AI-driven interviews and emails, according to the report. Microsoft suggests that job listing platforms implement multi-factor authentication for employers, so bad actors can't co-opt their listings, and deploy fraud-detection technologies to flag fraudulent content.
Also: How AI agents help hackers steal your confidential data - and what to do about it

Until those measures are widely adopted, users can look out for warning signs, such as an employment offer that requests personal information, like bank account or payment details, under the guise of background-check fees or identity verification. Other warning signs include unsolicited job offers or interview requests via text or email. Users can take a proactive step by verifying the legitimacy of the employer and recruiter, cross-checking their details on LinkedIn, Glassdoor, and other official websites.

"Make sure that if it sounds too good to be true, like minimal experience, where a great salary is probably too good to be true," said Bissell.

Tech support scams trick users into thinking they need technical support services for problems that do not exist, using advanced social engineering ploys via text, email, and other channels. The bad actors then gain remote access to the person's computer, allowing them to view information and install malware. Even though this attack does not necessarily involve AI, it is still highly effective at targeting victims.

For example, Microsoft Threat Intelligence observed the ransomware-focused cybercriminal group Storm-1811 posing as IT support from legitimate organizations through voice phishing (vishing) attacks, convincing users to hand over access to their computers via Quick Assist. Similarly, Storm-1811 has used Microsoft Teams to launch vishing attacks on targeted users.

Also: The best VPN services for iPhone and iPad (yes, you need to use one)

Microsoft said it has mitigated such attacks by "suspending identified accounts and tenants associated with inauthentic behavior." The company warns that unsolicited tech support offers are likely scams. According to the report, proactive measures users can take include opting for Remote Help instead of Quick Assist, blocking full-control requests in Quick Assist, and taking advantage of digital fingerprinting capabilities.

AI is evolving rapidly, and its advanced capabilities can help your organization stay protected. Bissell said every company should consider implementing AI as soon as possible to stay ahead of the curve.

"An important piece of advice for companies is, in this cat and mouse game, they've got to adopt AI for defensive purposes now because, if they don't, then they're going to be at a disadvantage from the attackers," said Bissell.
[2]
AI is making online shopping scams harder to spot
What used to take a scammer days to create now takes only minutes. A new report from Microsoft highlights the scale of the problem. The company says it took down almost 500 malicious web domains last year and stopped approximately 1.6 million bot signup attempts every hour.

"Last year we were tracking 300 unique nation-state and financial crime groups. This year, we're tracking 1,500," Vasu Jakkal, corporate vice president of Microsoft Security, told CBS News Confirmed.

The company attributes much of the rise in this type of crime to generative AI, which has streamlined the process of making a website. "You can just buy a kit off the web," Jakkal explained. "It's an assembly line. Someone builds the malware. Someone builds the infrastructure. Someone hosts the website."

Jakkal explained that AI isn't just helping set up fraudulent sites; it also helps make them more believable. She said scammers use generative AI to create product descriptions, images, reviews, and even influencer videos as part of a social engineering strategy to dupe shoppers into believing they're scrolling through a legitimate business, when in reality they're being lured into a digital trap.

Another tactic outlined in Microsoft's report is domain impersonation. Jakkal said scammers make a near-perfect copy of a legitimate website's address, sometimes changing just a single letter, to trick consumers into giving up money and information (see the illustrative sketch below).

As well as raising awareness of these scams, the company is introducing new tools to help safeguard its customers. Microsoft's web browser, Microsoft Edge, now features typo and domain impersonation protection, which prompts users to check a website's URL if the program suspects there may be a misspelling. The browser also uses machine learning to block potentially malicious sites before consumers reach the homepage.

"We're trying to combat at every place where we see there's a potential of someone being vulnerable to a fraud attempt," Jakkal said. The idea is to put checks and balances in place so people are able to pause and reevaluate, she said.

Scott Shackelford, executive director at the Center for Applied Cybersecurity Research at Indiana University, commended Microsoft for being one of the most proactive companies in fraud prevention, but said more action needs to come from both the private and public sectors. "Having the backing of big tech as part of this kind of public, private partnership would be a really great way to show that they do take it seriously."

No matter where you're browsing, CBS News Confirmed has compiled some tips to spot sham sites.
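To make the single-letter impersonation tactic concrete, here is a minimal, hypothetical Python sketch of how a look-alike domain check could work: it compares a URL's hostname against a small watchlist of legitimate domains using plain edit distance. The watchlist, threshold, and example URLs are assumptions for illustration only; Edge's actual protection reportedly relies on deep learning and machine learning models rather than this simple heuristic.

```python
from urllib.parse import urlparse

# Hypothetical watchlist of brands to protect; a real system would use a much
# larger, curated list. These values are illustrative assumptions only.
KNOWN_DOMAINS = ["microsoft.com", "paypal.com", "amazon.com"]


def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two strings (dynamic programming, two rows)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # delete a character
                curr[j - 1] + 1,           # insert a character
                prev[j - 1] + (ca != cb),  # substitute (0 cost if characters match)
            ))
        prev = curr
    return prev[-1]


def looks_impersonated(url: str, max_distance: int = 2) -> str | None:
    """Return the legitimate domain this URL appears to imitate, or None."""
    host = (urlparse(url).hostname or "").removeprefix("www.")
    for legit in KNOWN_DOMAINS:
        distance = edit_distance(host, legit)
        # Close to a known domain but not an exact match -> likely a look-alike.
        if 0 < distance <= max_distance:
            return legit
    return None


if __name__ == "__main__":
    for candidate in ("https://www.rnicrosoft.com/login", "https://paypa1.com/verify"):
        brand = looks_impersonated(candidate)
        if brand:
            print(f"Warning: {candidate} resembles {brand} - possible impersonation")
```

A simple distance check like this misses homoglyph swaps (for example, Cyrillic look-alike characters), subdomain tricks, and newly registered domains, which is presumably why production protections pair such heuristics with learned models and reputation data.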
Microsoft's latest Cyber Signals report highlights the increasing use of AI in online scams, detailing new threats and offering guidance on how individuals and businesses can protect themselves.
Microsoft's recent Cyber Signals report, titled 'AI-Driven Deception: Emerging Fraud Threats and Countermeasures,' reveals a significant increase in AI-powered cyber scams. Between March 2024 and March 2025, Microsoft thwarted approximately $4 billion worth of fraud attempts, many of which were AI-enhanced [1].
Kelly Bissell, CVP of Fraud and Abuse at Microsoft, stated, "We've seen it where a bunch of people are using AI really well to improve their lives, which is what we want, but in the hands of bad actors, they're using AI to supercharge their scams" [1].
The report identifies several types of AI-enhanced scams:
Fraudulent Websites: AI is being used to create visually indistinguishable fake websites with AI-generated product descriptions, images, and reviews [1].
Fake Job Listings: Bad actors can create convincing job ads in seconds, complete with auto-generated descriptions and AI-driven interviews [1].
Tech Support Scams: While not necessarily AI-driven, these scams use advanced social engineering tactics to trick users into believing they need technical support for non-existent problems [1].
Microsoft reports taking down almost 500 malicious web domains last year and stopping approximately 1.6 million bot signup attempts every hour. The company is now tracking 1,500 unique nation-state and financial crime groups, up from 300 in the previous year [2].
Microsoft and cybersecurity experts recommend several strategies to combat these threats:
Use browsers with built-in protections: Microsoft Edge offers website typo protection, domain impersonation protection, and a Scareware Blocker [1].
Avoid impulse buying and be cautious of urgency tactics [1].
Verify job offers and recruiters' legitimacy through official channels [1].
Be wary of unsolicited tech support offers [1].
Implement AI for defensive purposes in organizations [1].
Experts emphasize the need for both private and public sector involvement in combating AI-powered scams. Scott Shackelford, executive director at the Center for Applied Cybersecurity Research at Indiana University, suggests that "Having the backing of big tech as part of this kind of public, private partnership would be a really great way to show that they do take it seriously" [2].
As AI continues to evolve rapidly, it's crucial for individuals and organizations to stay informed about emerging threats and adopt proactive measures to protect themselves in the digital landscape.