Curated by THEOUTPOST
On Wed, 16 Apr, 4:05 PM UTC
5 Sources
[1]
AI unleashes more advanced scams. Here's what to look out for (and how to stay protected)
Microsoft's Cyber Signals report identifies AI-driven deception in the workplace. Artificial intelligence (AI) makes creating new material, such as text or images, as easy as typing a simple text prompt. That capability means big productivity gains for individuals, but bad actors can exploit AI to create elaborate cyber scams.

Evidence suggests cyberattacks are on the rise. Between March 2024 and March 2025, Microsoft stopped approximately $4 billion in fraud attempts, many of them AI-enhanced. "We've seen it where a bunch of people are using AI really well to improve their lives, which is what we want, but in the hands of bad actors, they're using AI to supercharge their scams," Kelly Bissell, CVP of Fraud and Abuse at Microsoft, told ZDNET.

On Wednesday, Microsoft published its Cyber Signals report, titled 'AI-Driven Deception: Emerging Fraud Threats and Countermeasures,' to help people identify common attacks and learn what preventative measures they can take. A roundup of the attacks identified in the report, along with tips for staying safe online, follows below.

If you have encountered AI-generated content, whether an image or text, you have likely seen how realistic it can be. Bad actors can use this capability to create fraudulent websites that are visually indistinguishable from real ones, complete with AI-generated product descriptions, images, and even reviews. Because this requires no prior technical knowledge and only a small amount of time, consumers are more likely to come across these scams than in the past.

There are ways to stay protected, including using a browser with mitigations built in. For example, Microsoft Edge has website typo protection and domain impersonation protection, which use deep learning to warn users about fake websites.
Edge also has a Scareware Blocker, which blocks scam pages and popup screens. Microsoft also identifies proactive measures users can take, such as avoiding impulse buying, since fraudulent sites often simulate a false sense of urgency with countdown timers and similar tactics, and avoiding payment mechanisms that lack fraud protections, such as direct bank transfers or cryptocurrency. Another tip is to be cautious about clicking on ads without verifying them first. "AI for bad can actually target 'Sabrina' and what you do because of all your public information that you work on, customize an ad for you, and they can set up a website and pay for an ad within the search engine pretty easily for Sabrina or lots of Sabrinas," Bissell said as an example.

Bad actors can also create fake job listings in seconds using AI. To make these ads even more convincing, the actors list them on reputable job platforms using stolen credentials, auto-generated descriptions, and even AI-driven interviews and emails, according to the report. Microsoft suggests that job listing platforms implement multi-factor authentication for employers, so bad actors can't co-opt their listings, along with fraud-detection technologies to flag fraudulent content.

Until those measures are widely adopted, users can look out for warning signs, such as an employment offer that requests personal information like bank account or payment data under the guise of background-check fees or identity verification. Other warning signs include unsolicited job offers or interview requests via text or email. Users can take a proactive step by verifying an employer's and recruiter's legitimacy, cross-checking their details on LinkedIn, Glassdoor, and other official websites.
"Make sure that if it sounds too good to be true, like minimal experience, where a great salary is probably too good to be true," said Bissell.

Tech support scams trick users into thinking they need technical support for problems that do not exist, using advanced social engineering ploys via text, email, and other channels. The bad actors then gain remote access to the person's computer, allowing them to view information and install malware. Even though this attack does not necessarily involve AI, it is still highly effective at targeting victims.

For example, Microsoft said its Threat Intelligence team observed the ransomware-focused cybercriminal group Storm-1811 posing as IT support from legitimate organizations in voice phishing (vishing) attacks, convincing users to hand over access to their computers via Quick Assist. Storm-1811 has similarly used Microsoft Teams to launch vishing attacks on targeted users.

Microsoft said it has mitigated such attacks by "suspending identified accounts and tenants associated with inauthentic behavior." However, the company warns that unsolicited tech support offers are likely scams. Proactive measures users can take, per the report, include opting for Remote Help instead of Quick Assist, blocking full-control requests in Quick Assist, and taking advantage of digital fingerprinting capabilities.

AI is evolving rapidly, and its advanced capabilities can help your organization stay protected. Bissell said every company should consider implementing AI as soon as possible to stay ahead of the curve. "An important piece of advice for companies is, in this cat and mouse game, they've got to adopt AI for defensive purposes now because, if they don't, then they're going to be at a disadvantage from the attackers," said Bissell.
[2]
Microsoft warns AI is making it faster and easier to create online scams
In brief: It seems one profession that really loves generative AI is that of the cybercriminal. Microsoft warns that the technology has evolved to the point where creating an online scam can now take minutes rather than days or weeks and requires little technical knowledge.

In its latest edition of the Cyber Signals report, Microsoft writes that AI has started to lower the technical bar for fraud and cybercrime actors looking for their own productivity tools. The range of cyber scams AI can be used for is extensive. The tools can, for example, help create social engineering lures by scanning and scraping the web to build detailed profiles of employees or other targets. There are also cases of complex fraud schemes that use AI-enhanced product reviews and AI-generated storefronts, with scammers creating entire sham websites and fake e-commerce brands, complete with fabricated business histories and customer testimonials. Scammers can even use AI for customer service chatbots that can lie about unexplained charges and other anomalies.

It's long been reported that advancing deepfake technology is making this a popular tool for scammers. We've seen it used to create fake celebrity endorsements, impersonate friends and family members, and, as Microsoft notes, for job interviews - both hiring and applying - conducted via video calls. The company notes that lip-syncing delays, robotic speech, or odd facial expressions are giveaway signs that the person on the other end of a video call might be a deepfake.

Microsoft recommends that consumers be wary of limited-time deals, countdown timers, and suspicious reviews. They should also cross-check domain names and reviews before making purchases, and avoid using payment methods that lack fraud protections, such as direct bank transfers and cryptocurrency payments. Tech support scams are also on the rise.
While AI doesn't always play a part in these incidents, tech support scammers often pretend to be legitimate IT support from well-known companies and use social engineering tactics to gain the trust of their targets. The Windows Quick Assist tool, which lets someone use a remote connection to view a screen or take it over to fix problems, is regularly used in these scams. As such, Microsoft is adding warnings to Quick Assist and requires users to check a box acknowledging the security implications of sharing their screen. Microsoft also recommends using Remote Help instead of Quick Assist for internal tech support. While the post focuses on the dangers of AI scams, it also notes that Microsoft continues to protect its platforms and customers from cybercriminals. Between April 2024 and April 2025, Microsoft stopped $4 billion worth of fraud attempts, rejected 49,000 fraudulent partnership enrollments, and blocked about 1.6 million bot signup attempts per hour.
[3]
AI is helping fraudsters pump out scamming campaigns in minutes
Sophisticated scam campaigns that used to take cybercriminals weeks to set up can now pop up in minutes, Microsoft is warning. The difference? Generative Artificial Intelligence (GenAI). The tools that emerged a few years ago, including ChatGPT, Copilot, Midjourney, and many others, have not just cut down the time needed to prepare a sophisticated internet scam; they have also lowered the barrier to entry, allowing even rookie fraudsters to prepare and run advanced campaigns.

In its latest Cyber Signals report on AI-assisted scams, Microsoft said that cybercriminals are using GenAI for more than "just" phishing email copy. They create deepfakes (usually fake videos of celebrities endorsing a project) and AI-generated "sham websites" mimicking legitimate businesses. "What used to take scammers days or weeks to create will now take minutes," Microsoft warned.

At the end of the day, though, this is still "just" phishing and fraud - people can mitigate the risk by taking it slow on the internet. Microsoft says the best things to do are not to be fooled by "limited-time" deals and countdown timers, to only click on verified ads (many scam sites spread through AI-optimized social media ads), and to be skeptical of social proof (scammers can use AI-generated reviews, influencer endorsements, and testimonials to exploit the trust people place in social media).

Finally, users should never provide personal or financial information to people reaching out via unsolicited SMS or email messages. "You should never provide a Social Security number, banking details, or passwords to an unverified employer," the report said. Paying for a job opportunity is almost always a red flag. The best advice is usually the simplest: if it sounds too good to be true, it probably is. Artificial intelligence will make internet fraud even more dangerous, but with a little common sense and a little care, it doesn't have to be successful.
[4]
AI is making online shopping scams harder to spot
What used to take days now takes a scammer only minutes to create. A new report from Microsoft highlights the scale of the problem: the company says it took down almost 500 malicious web domains last year and stopped approximately 1.6 million bot signup attempts every hour. "Last year we were tracking 300 unique nation-state and financial crime groups. This year, we're tracking 1,500," Vasu Jakkal, corporate vice president of Microsoft Security, told CBS News Confirmed.

The company attributes much of the rise in this type of crime to generative AI, which has streamlined the process of making a website. "You can just buy a kit off the web," Jakkal explained. "It's an assembly line. Someone builds the malware. Someone builds the infrastructure. Someone hosts the website."

Jakkal explained that AI isn't just helping set up fraudulent sites; it also helps make them more believable. She said scammers use generative AI to create product descriptions, images, reviews, and even influencer videos as part of a social engineering strategy to dupe shoppers into believing they're scrolling through a legitimate business, when in reality they're being lured into a digital trap.

Another tactic outlined in Microsoft's report is domain impersonation. Jakkal said scammers make a near-perfect copy of a legitimate website's address, sometimes changing just a single letter, to trick consumers into giving up money and information.

As well as raising awareness of these scams, the company is introducing new tools to help safeguard its customers. Microsoft's web browser, Microsoft Edge, now features typo and domain impersonation protection, which prompts users to check a website's URL if the program suspects there may be a misspelling. The browser also uses machine learning to block potentially malicious sites before consumers reach the homepage. "We're trying to combat at every place where we see there's a potential of someone being vulnerable to a fraud attempt," Jakkal said.
The idea is to put checks and balances in place so people are able to pause and reevaluate, she said.

Scott Shackelford, executive director of the Center for Applied Cybersecurity Research at Indiana University, commended Microsoft for being one of the most proactive companies in fraud prevention, but said more action needs to come from both the private and public sectors. "Having the backing of big tech as part of this kind of public, private partnership would be a really great way to show that they do take it seriously."
[5]
How AI is supercharging cyber criminals' kits
AI is lowering the technical bar for cybercriminals, making it easier and cheaper to generate content for attacks. Software used in fraud attempts is proliferating at a rapid pace, from legitimate apps being misused for illegal purposes to fraud-oriented tools becoming available to bad actors. As artificial intelligence (AI) advances, it's not only driving innovation and productivity but also inadvertently powering cybercriminals and malicious actors.

Tech giant Microsoft said in its latest Cyber Signals report that it thwarted fraud attempts worth $4 billion on its platforms, including Azure and Edge, and blocked about 1.6 million bot signup attempts per hour between April 2024 and April 2025. According to Microsoft, AI-powered fraud is occurring globally, with most activity coming from China and Europe, especially Germany.

How is AI powering fraud?
- AI tools enable cybercriminals to scan and harvest corporate data online, helping them build detailed profiles and craft convincing social engineering traps.
- Bad actors are also creating fraud schemes like AI-enhanced product reviews and AI-generated storefronts, building real-looking websites or ecommerce brands, complete with detailed fake histories and made-up customer reviews.
- Deepfakes, voice cloning, phishing emails, fake websites, and other threats are being run at a wider scale, driven in large part by AI.

The ecommerce deception
Fraudulent online shopping websites that previously took weeks to build can now be created within minutes using AI, which generates product webpages, descriptions, customer reviews, and more. Scammers also create AI-powered customer chatbots of the kind now used by nearly all ecommerce websites and apps.
These bots delay chargebacks by stalling customers with scripted excuses.

Jobs fraud
Generative AI is key for scammers creating fake job listings on various platforms. The technology generates fake profiles with stolen credentials, fake job postings with auto-generated descriptions, and AI-powered email campaigns to phish job seekers. These seem more credible thanks to AI-powered interviews and automated emails.

How to protect yourself?
Microsoft said in its report that customers should be cautious of impulse online shopping and of unverified advertisements and links, and should avoid cryptocurrency and other payment methods that lack fraud protections.
Microsoft's latest Cyber Signals report highlights the growing threat of AI-enhanced scams, detailing how artificial intelligence is making it easier for cybercriminals to create sophisticated fraud schemes.
Microsoft's latest Cyber Signals report has revealed a concerning trend in the cybersecurity landscape: artificial intelligence (AI) is significantly enhancing the capabilities of cybercriminals, making it easier and faster to create sophisticated online scams [1]. The report, titled "AI-Driven Deception: Emerging Fraud Threats and Countermeasures," highlights how AI is lowering the technical bar for fraud and cybercrime actors, allowing them to create convincing scams in minutes rather than days or weeks [2].
Between April 2024 and April 2025, Microsoft reported stopping approximately $4 billion worth of fraud attempts on its platforms, including Azure and Edge [5]. The company also blocked about 1.6 million bot signup attempts per hour during this period [4]. Kelly Bissell, CVP of Fraud and Abuse at Microsoft, noted that while AI has improved many people's lives, it has also been exploited by bad actors to "supercharge their scams" [1].
Fraudulent Websites: AI enables the creation of visually indistinguishable fake websites with AI-generated product descriptions, images, and reviews [1].
Fake Job Listings: Bad actors use AI to generate convincing job postings, complete with auto-generated descriptions and even AI-driven interviews and emails [1].
Tech Support Scams: While not always AI-driven, these scams have become more sophisticated, often involving voice phishing (vishing) attacks [1].
Deepfakes and Social Engineering: AI is used to create convincing deepfakes for celebrity endorsements and to impersonate friends and family members [2].
Domain Impersonation: Scammers create near-perfect copies of legitimate website addresses, changing just a single letter to trick consumers [4].
To combat these evolving threats, Microsoft and cybersecurity experts recommend several protective measures:
Use browsers with built-in mitigations, such as Microsoft Edge's website typo protection and domain impersonation protection [1].
Avoid impulse buying and be cautious of limited-time deals and countdown timers [2].
Verify the legitimacy of employers and recruiters when job hunting [1].
Be wary of unsolicited tech support offers [1].
Cross-check domain names and reviews before making purchases [2].
Avoid payment methods that lack fraud protections, such as direct bank transfers and cryptocurrency [2].
While AI poses significant challenges in the cybersecurity landscape, it also offers potential solutions. Bissell emphasizes the importance of companies adopting AI for defensive purposes to stay ahead of attackers [1]. Microsoft is introducing new AI-powered tools to safeguard customers, such as typo and domain impersonation protection in Microsoft Edge [4].
As AI continues to evolve, the public and private sectors must work together to combat these emerging threats. Scott Shackelford, executive director of the Center for Applied Cybersecurity Research at Indiana University, suggests that a public-private partnership with big tech companies would be an effective way to address these challenges [4].
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved