Curated by THEOUTPOST
On Thu, 2 Jan, 4:01 PM UTC
9 Sources
[1]
AI written personalized emails from your loved ones could be a big scam; Gmail, Outlook, and Apple Mail do not have adequate defenses to stop this, here's why
Recent reports reveal that Artificial Intelligence (AI)-written personalized emails have become a major tool for fraud and are increasingly difficult to recognize as scams. Compounding the problem, Apple Mail, Outlook, and Gmail reportedly lack adequate defenses to stop them. AI is increasingly being exploited by cybercriminals to create highly personalized phishing emails that victims struggle to recognize as fraudulent, reported the New York Post. According to the report, AI tools analyze social media activity to gather information about potential targets, allowing scammers to craft messages that appear to come from trusted sources, including family or friends. The trend has raised serious concerns among cybersecurity experts, who note that traditional email defenses are inadequate against these sophisticated attacks. Kirsty Kelly, CISO of insurer Beazley, emphasized the increasingly personal nature of these scams, while McAfee warned that the frequency and sophistication of AI-driven phishing attacks are expected to rise sharply. Phishing remains a primary method of initiating cyber breaches, with over 90% of successful attacks starting with such messages. Experts recommend that users remain vigilant, avoid clicking links in unsolicited emails, and enhance account security with two-factor authentication. The rise of generative AI in crafting convincing phishing emails poses a serious challenge, as these tools produce flawless text that mimics legitimate communications.
[2]
Gmail, Outlook and Apple users urged to watch out for this new email...
AI tools are being maliciously used to send "hyper-personalized emails" that are so sophisticated victims can't identify that they're fraudulent. According to the Financial Times, AI bots are compiling information about unsuspecting email users by analyzing their "social media activity to determine what topics they may be most likely to respond to." Scam emails are subsequently sent to the users that appear as if they're composed by family and friends. Because of the personal nature of the email, the recipient is unable to identify that it is actually nefarious. "This is getting worse and it's getting very personal, and this is why we suspect AI is behind a lot of it," Kirsty Kelly, the chief information security officer at the insurance agency Beazley, told the outlet. "We're starting to see very targeted attacks that have scraped an immense amount of information about a person." "AI is giving cybercriminals the ability to easily create more personalized and convincing emails and messages that look like they're from trusted sources," security company McAfee recently warned. "These types of attacks are expected to grow in sophistication and frequency." While many savvy internet users now know the telltale signs of traditional email scams, it's much harder to tell when these new personalized messages are fraudulent. Gmail, Outlook, and Apple Mail do not yet have adequate "defenses in place to stop this," Forbes reports. "Social engineering," ESET cybersecurity advisor Jake Moore told Forbes, "has an impressive hold over people due to human interaction but now as AI can apply the same tactics from a technological perspective, it is becoming harder to mitigate unless people really start to think about reducing what they post online." Bad actors are also able to utilize AI to write convincing phishing emails that mimic banks, accounts and more.
According to data from the US Cybersecurity and Infrastructure Security Agency cited by the Financial Times, over 90% of successful breaches start with phishing messages. These highly sophisticated scams can bypass security measures, and the inbox filters meant to screen emails for scams may be unable to identify them, Nadezda Demidova, cybercrime security researcher at eBay, told the Financial Times. "The availability of generative AI tools lowers the entry threshold for advanced cybercrime," Demidova said. McAfee warned that 2025 would usher in a wave of advanced AI used to "craft increasingly sophisticated and personalized cyber scams," according to a recent blog post. Software company Check Point issued a similar prediction for the new year. "In 2025, AI will drive both attacks and protections," Dr. Dorit Dor, the company's chief technology officer, said in a statement. "Security teams will rely on AI-powered tools tailored to their unique environments, but adversaries will respond with increasingly sophisticated, AI-driven phishing and deepfake campaigns." To protect themselves, users should never click on links within emails unless they can verify the legitimacy of the sender. Experts also recommend bolstering account security with two-factor authentication and strong passwords or passkeys. "Ultimately," Moore told Forbes, "whether AI has enhanced an attack or not, we need to remind people about these increasingly more sophisticated attacks and how to think twice before transferring money or divulging personal information when requested -- however believable the request may seem."
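The two-factor authentication the experts recommend commonly rests on time-based one-time passwords (TOTP, standardized in RFC 6238): a short code derived from a shared secret and the current time, so a phished password alone is not enough to log in. Below is a minimal illustrative sketch in Python using only the standard library; the secret shown is the RFC's published test value, not anything real:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 time-based one-time password using HMAC-SHA1."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of 30-second windows since the Unix epoch.
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken from the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Check against a published RFC 6238 test vector (ASCII secret "12345678901234567890"):
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, t=59, digits=8))  # -> 94287082
```

Because the code changes every 30 seconds and is bound to a secret the attacker never sees, a password harvested by a phishing page does not by itself grant access to the account.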
[3]
AI-generated phishing emails are getting very good at targeting executives
Corporate executives are being hit with an influx of hyper-personalized phishing scams generated by artificial intelligence bots, as the fast-developing technology makes advanced cyber crime easier. Leading companies such as British insurer Beazley and ecommerce group eBay have warned of the rise of fraudulent emails containing personal details probably obtained through AI analysis of online profiles. "This is getting worse and it's getting very personal, and this is why we suspect AI is behind a lot of it," said Beazley's chief information security officer Kirsty Kelly. "We're starting to see very targeted attacks that have scraped an immense amount of information about a person." Cyber security experts said the increasing attacks come during a period of rapid advancement for AI technology, as tech companies race to create ever more sophisticated systems and launch popular products for consumers and businesses. AI bots can quickly ingest large quantities of data about the tone and style of a company or individual and replicate these features to craft a convincing scam. They can also scrape a victim's online presence and social media activity to determine what topics they may be most likely to respond to -- helping hackers generate bespoke phishing scams at scale. "The availability of generative AI tools lowers the entry threshold for advanced cyber crime," said eBay cyber crime security researcher Nadezda Demidova. "We've witnessed a growth in the volume of all kinds of cyber attacks," particularly in "polished and closely targeted" phishing scams, she added.
[4]
AI-generated phishing scams target corporate executives
Corporate executives are being hit with an influx of hyper-personalised phishing scams generated by artificial intelligence bots, as the fast-developing technology makes advanced cyber crime easier. Leading companies such as British insurer Beazley and ecommerce group eBay have warned of the rise of fraudulent emails containing personal details probably obtained through AI analysis of online profiles. "This is getting worse and it's getting very personal, and this is why we suspect AI is behind a lot of it," said Beazley's chief information security officer Kirsty Kelly. "We're starting to see very targeted attacks that have scraped an immense amount of information about a person." Cyber security experts said the increasing attacks come during a period of rapid advancement for AI technology, as tech companies race to create ever more sophisticated systems and launch popular products for consumers and businesses. AI bots can quickly ingest large quantities of data about the tone and style of a company or individual and replicate these features to craft a convincing scam. They can also scrape a victim's online presence and social media activity to determine what topics they may be most likely to respond to -- helping hackers generate bespoke phishing scams at scale. "The availability of generative AI tools lowers the entry threshold for advanced cyber crime," said eBay cyber crime security researcher Nadezda Demidova. "We've witnessed a growth in the volume of all kinds of cyber attacks", particularly in "polished and closely targeted" phishing scams, she added. Kip Meintzer, an executive at security company Check Point Software Technologies, told a recent investor conference that AI had given hackers "the ability to write a perfect phishing email". More than 90 per cent of successful cyber attacks begin with a phishing email, according to the US Cybersecurity and Infrastructure Security Agency. 
As these attacks become more sophisticated, their consequences have become increasingly expensive, with the global average cost of a data breach rising nearly 10 per cent to $4.9mn in 2024, according to IBM. Researchers have warned that AI is particularly effective for crafting business email compromise scams -- a specific type of malware-free phishing where fraudsters trick recipients into transferring funds or divulging confidential company information. This kind of scam has cost victims worldwide more than $50bn since 2013, according to the FBI. AI is "being used to scan everything to see where there's a vulnerability, whether that's in code or in the human chain", said Sean Joyce, global cyber security lead at PwC. Phishing scams generated using AI may also be more likely to bypass companies' email filters and cyber security training. Basic filters, which generally block repeated bulk phishing campaigns, may struggle to track these scams if AI is used to rapidly generate thousands of reworded messages, said eBay's Demidova.
[5]
Look out for hyper-personalized phishing attacks, powered by AI
Phishing attacks are about to get a whole lot more convincing. A new report warns that scammers are now using AI to scrape information about you from your online profiles in order to send hyper-personalized emails which target your login credentials. By finding out everything from your employer to your interests, scammers can send emails which have a far greater chance of appearing to be genuine ... Phishing has been around forever. A scammer sends out mass emails claiming to be from banks, email providers, cryptocurrency providers, and popular companies like Amazon and Apple. The fake content is usually intended to create a sense of urgency. For example, your bank warning you about a fraudulent transaction, Amazon sending you an invoice for an expensive product you haven't ordered, or Apple telling you your iCloud account is about to be cancelled. The hope is that victims will be panicked into clicking the link and using their login credentials before they've had time to think about whether or not it is real. Most of these attacks are very generic, but a Financial Times report warns that this is now changing. Scammers are starting to use AI to scrape public profiles in order to generate emails which appear to know a lot about you, and are therefore likely to fool more people into believing they are genuine. Leading companies such as British insurer Beazley and ecommerce group eBay have warned of the rise of fraudulent emails containing personal details probably obtained through AI analysis of online profiles. "This is getting worse and it's getting very personal, and this is why we suspect AI is behind a lot of it," said Beazley's chief information security officer Kirsty Kelly. "We're starting to see very targeted attacks that have scraped an immense amount of information about a person." Highly personalized phishing emails are also far more likely to make it through filters created by both corporations and email providers like Apple and Google.
Right now, the main targets appear to be corporate employees, with the AI bots even copying the writing style of particular companies from their websites and other public content to add weight to their scams. But it's likely the same techniques will be used against consumers too, doing things like scraping your social media profiles. The main safeguard against phishing attacks is to never click on links sent in emails. Always use your own bookmarks, or type in a known URL.
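The "use your own bookmarks" advice exists because the text displayed for a link and the address it actually opens can differ. A small illustrative Python sketch (the domains below are made up) that compares the host named in a link's visible text against the host its href really points at:

```python
from urllib.parse import urlparse

def link_host(href):
    """Return the hostname a link actually points at (lowercased)."""
    return (urlparse(href).hostname or "").lower()

def looks_spoofed(display_text, href):
    """Flag links whose visible text names a domain that differs from the real target."""
    shown = display_text.lower().strip().rstrip("/")
    if " " in shown or "." not in shown:
        return False  # display text is ordinary words, not a domain claim
    # Treat bare domains in the display text as if they had a scheme, then compare hosts.
    shown_host = link_host(shown if "://" in shown else "https://" + shown)
    return shown_host != "" and shown_host != link_host(href)

# Display text says one bank; the underlying href goes somewhere else entirely.
print(looks_spoofed("www.mybank.com", "https://mybank.secure-login.example/verify"))  # -> True
print(looks_spoofed("www.mybank.com", "https://www.mybank.com/login"))                # -> False
```

Email clients do a version of this comparison internally, but as the reporting above notes, it offers no protection when the scam avoids lookalike links and instead persuades the reader to act directly.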
[6]
Corporate Executives Said to Be Targeted by AI-Generated Phishing Scams
eBay researcher reportedly said that AI has made cyber crimes easier
eBay, the ecommerce giant, and several other companies are reportedly witnessing an increased volume of personalised phishing attacks aimed at high-level employees. As per the report, these phishing scams are being carried out using artificial intelligence (AI) systems to make them appear human-like and avoid the telltale signs of a typical scam email. These cyber attackers are reportedly also using AI to scrape and analyse data about company executives to add a personal touch to the messages. Basic security filters are said to be insufficient to stop such emails at an organisational level. According to a Financial Times report, companies such as eBay and the UK-based insurance firm Beazley have highlighted the increase in fraudulent emails that contain personal information about their executive-level employees. Kirsty Kelly, the chief information security officer at Beazley, told the publication that AI is suspected to be behind these attacks due to the personal nature of the emails. Kelly reportedly also added that these targeted phishing attacks have likely been conducted after a large amount of data about employees was scraped from various sources. Notably, phishing scams involve tricking an individual into revealing sensitive and financial information by pretending to be a trustworthy entity. These are usually carried out over emails, text messages, or by sharing URLs to fraudulent websites. Typical phishing attacks, however, are impersonal and often include vague information and grammatical errors, leading to a low success rate. In contrast, as per the report, these AI-generated phishing scams differ in that they use highly emotive language and include personal information about the target. Such emails can be more convincing and could trigger a positive response from individuals.
Highlighting that generative AI tools lower the barriers to carrying out cyber attacks, Nadezda Demidova, a cybercrime security researcher at eBay, told the Financial Times, "We've witnessed a growth in the volume of all kinds of cyber attacks." She reportedly added that a particular area of concern was "polished and closely targeted" phishing scams. The security researcher reportedly also explained that basic security filters, which usually identify and block bulk phishing campaigns, could struggle to stop AI-generated emails, since high-volume attacks can now be carried out in which each email is unique and appears to come from a legitimate sender.
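Demidova's point about filters can be made concrete. A traditional bulk filter often fingerprints message bodies and blocks text it has seen many times; if AI rewords every message, each fingerprint is unique and the repetition counter never trips. A simplified, hypothetical sketch in Python (the threshold and messages are purely illustrative):

```python
import hashlib

def body_fingerprint(body):
    """Exact-match fingerprint: any one-word rewording changes the hash entirely."""
    normalized = " ".join(body.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

seen = {}

def is_bulk(body, threshold=10):
    """Flag a message once its exact body has been seen `threshold` times."""
    fp = body_fingerprint(body)
    seen[fp] = seen.get(fp, 0) + 1
    return seen[fp] >= threshold

# A classic bulk blast: identical text repeats, so the counter trips the threshold.
blast = ["Your account is locked, click here"] * 12
flags = [is_bulk(b) for b in blast]
print(flags[-1])  # -> True

# AI-reworded variants: every body is unique, so each hash is fresh and nothing trips.
variants = [f"Hi Sam, re: invoice {i}, your account needs review" for i in range(12)]
print(any(is_bulk(v) for v in variants))  # -> False
```

Real filters use fuzzier signals than an exact hash, but the underlying arms race is the same: per-recipient rewording pushes each message below whatever repetition threshold the defense relies on.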
[7]
How AI is making phishing scams unstoppable
According to a recent Financial Times report, corporate executives are increasingly targeted by sophisticated phishing scams generated by artificial intelligence (AI) bots, as reported by major companies like Beazley and eBay. The prevalence of these hyper-personalized attacks raises significant security concerns. Kirsty Kelly, chief information security officer at Beazley, emphasized the severity of the issue, stating, "This is getting worse and it's getting very personal, and this is why we suspect AI is behind a lot of it." She noted that these scams utilize extensive information scraped about individuals through AI analysis of online profiles. AI's rapid development enables it to process vast amounts of data, mimicking the tone and style of individuals or companies, making phishing scams convincingly realistic. Cybersecurity experts, including eBay's Nadezda Demidova, highlighted that the availability of generative AI tools has significantly lowered the barrier for engaging in advanced cyber crime, contributing to a noted increase in the volume of "polished and closely targeted" phishing scams. More than 90 percent of successful cyber attacks begin with a phishing email, according to the US Cybersecurity and Infrastructure Security Agency. The global average cost of a data breach rose nearly 10 percent to $4.9 million in 2024, according to IBM, reflecting the escalating financial risks associated with these attacks. AI's effectiveness extends to crafting business email compromise scams, a type of phishing attack without malware where fraudsters deceive recipients into transferring funds or sharing sensitive information. The FBI reported that these scams have cost victims worldwide over $50 billion since 2013. Sean Joyce, global cyber security lead at PwC, noted that AI is utilized to identify vulnerabilities in both code and the human chain.
Moreover, AI-generated phishing scams are more likely to bypass corporate email filters and cybersecurity training. Demidova pointed out that standard filters, which typically block bulk phishing attempts, may struggle against dynamically reworded scams generated in rapid succession by AI.
[8]
Corporate executives are being increasingly targeted by AI phishing scams
Top businesses such as eBay and Beazley are reporting huge increases in the volume of personalized phishing attacks levied against high-level employees. A recent report by the Financial Times outlined mounting concerns about the growing use of AI in targeted scams against executives at major firms. "This is getting worse and it's getting very personal, and this is why we suspect AI is behind a lot of it," Beazley's chief information security officer Kirsty Kelly told the publication. "We're starting to see very targeted attacks that have scraped an immense amount of information about a person." By processing vast amounts of data, AI can quickly scrape and retain information about victims, and mimic the tone and style of an individual or company. This means that the hyper-personalized phishing scams are becoming increasingly convincing and harder to spot. As these attacks become more sophisticated, they are costing victims more and more. More than 90% of successful cyber attacks originate with a phishing email, and the global average cost of a data breach has risen almost 10% to $4.9m in 2024, IBM reports. Phishing attacks have been on the rise for a long time, with some businesses receiving up to 36 phishing emails per day, and reports reveal a 28% increase in phishing attacks in the second quarter of 2024. Until recently, phishing attacks were generally impersonal, including only vague information with a reasonably low success rate. However, AI tools are lowering the entry threshold for these types of attacks, with 'polished and closely targeted' scams being seen in huge volumes. "Businesses need a multi-layered approach to avoid any damage as a consequence of phishing attacks," said Tim Callan, Chief Compliance Officer at Sectigo.
"Employees should be trained in order to be able to realise when they could be targets of phishing attacks, but businesses should also look at tried-and-tested technologies and implement phishing prevention infrastructures through their IT ecosystems."
[9]
AI Fuels Reported Rise in 'Polished' Phishing Scams | PYMNTS.com
Phishing scams targeting corporate executives are reportedly increasing thanks to AI. Companies like eBay and British insurer Beazley have warned about an uptick in fraudulent emails containing personal details, likely obtained via artificial intelligence (AI) analysis of online profiles, the Financial Times (FT) reported Thursday (Jan. 2). "This is getting worse and it's getting very personal, and this is why we suspect AI is behind a lot of it," Kirsty Kelly, chief information security officer at Beazley, told the FT. "We're starting to see very targeted attacks that have scraped an immense amount of information about a person." According to the report, cyber security experts say these attacks are increasing as AI grows in sophistication. AI bots can quickly consume mass quantities of information about a company's or person's style and tone and recreate them to plot an effective scam. They can also scrape victims' online and social media presence to find the topics they may be most likely to respond to -- helping hackers create large-scale bespoke phishing scams. "The availability of generative AI tools lowers the entry threshold for advanced cyber crime," said Nadezda Demidova, a cybercrime security researcher at eBay. "We've witnessed a growth in the volume of all kinds of cyber attacks," particularly in "polished and closely targeted" phishing scams, she added. AI helped add to a larger cyberattack landscape in 2024, PYMNTS wrote recently, part of a catalogue of threats that include ransomware, zero-day exploits and supply chain attacks. "It is essentially an adversarial game; criminals are out to make money and the [business] community needs to curtail that activity. What's different now is that both sides are armed with some really impressive technology," Michael Shearer, chief solutions officer at Hawk, said in an interview with PYMNTS. 
Training and education, PYMNTS wrote, remain crucial components of a robust cybersecurity strategy, as people are often the most vulnerable point in any system. Employees should be updated regularly on the latest phishing tactics and cyber threats, while simulated real-world attack scenarios can help bolster preparedness and resilience. And while AI can help cybercriminals carry out their schemes, it can also help companies shore up their defenses. Research from the PYMNTS Intelligence report "The AI MonitorEdge Report: COOs Leverage GenAI to Reduce Data Security Losses" shows that 55% of companies are employing AI-powered cybersecurity measures. That survey, taken in August of 2024, marked a sharp increase from the 17% of chief operating officers who reported using AI-driven security tools in May.
AI-generated phishing emails are becoming increasingly sophisticated, targeting executives and individuals with hyper-personalized content. This new wave of cyber attacks poses significant challenges for email security systems and users alike.
Artificial Intelligence (AI) is revolutionizing the landscape of cyber threats, particularly in the realm of phishing attacks. Cybercriminals are now leveraging AI tools to create highly personalized and convincing phishing emails that are increasingly difficult for victims to identify as fraudulent [1][2]. This new wave of sophisticated attacks poses significant challenges for traditional email security systems and individual users alike.
AI-powered tools are being used to analyze social media activity and gather extensive information about potential targets [2]. This allows scammers to craft messages that appear to come from trusted sources, including family and friends. The AI bots can quickly ingest large quantities of data about the tone and style of a company or individual, replicating these features to create convincing scams [3][4].
Kirsty Kelly, Chief Information Security Officer at Beazley, emphasizes the growing personal nature of these scams: "This is getting worse and it's getting very personal, and this is why we suspect AI is behind a lot of it. We're starting to see very targeted attacks that have scraped an immense amount of information about a person" [4].
Traditional email defenses are proving inadequate against these sophisticated AI-driven attacks. Popular email services such as Gmail, Outlook, and Apple Mail currently lack sufficient defenses to combat this new threat effectively [1][2]. The hyper-personalized nature of these emails makes it challenging for standard security filters to identify them as malicious.
Corporate executives are particularly at risk, with an influx of hyper-personalized phishing scams targeting high-level employees [3][4]. These attacks often involve business email compromise (BEC) scams, which have cost victims worldwide more than $50 billion since 2013, according to FBI data [4].
Phishing remains a primary method for initiating cyber breaches, with over 90% of successful attacks starting with such messages [1][4]. As these attacks become more sophisticated, their consequences have become increasingly expensive. The global average cost of a data breach rose nearly 10% to $4.9 million in 2024, according to IBM [4].
Cybersecurity experts recommend several measures to protect against these advanced phishing attempts: never click links in emails unless the sender's legitimacy can be verified; navigate to sites via your own bookmarks or by typing a known URL; enable two-factor authentication and use strong passwords or passkeys; and limit the personal information shared publicly online [2][5].
As AI technology continues to advance rapidly, both cyber attacks and cybersecurity measures are expected to evolve. Dr. Dorit Dor, Chief Technology Officer at Check Point, predicts: "In 2025, AI will drive both attacks and protections. Security teams will rely on AI-powered tools tailored to their unique environments, but adversaries will respond with increasingly sophisticated, AI-driven phishing and deepfake campaigns" [2].
The rise of AI-powered phishing attacks marks a new era in cybersecurity challenges, requiring heightened awareness and advanced protective measures from both organizations and individuals.
© 2025 TheOutpost.AI All rights reserved