On Wed, 19 Mar, 4:03 PM UTC
2 Sources
[1]
How AI agents help hackers steal your confidential data - and what to do about it
Cybercrooks are using automated AI bots to generate multiple login attempts across a range of services. And it's about to get much worse.

Like many people, cybercriminals are using artificial intelligence to help them work faster, easier, and smarter. With automated bots, account takeovers, and social engineering, a savvy scammer knows how to enhance their usual tactics with an AI spin. A new report from Gartner shows how this is playing out now and how it may get worse in the near future.

Account takeovers have become a persistent area of attack for one major reason: weak authentication, said Gartner VP Analyst Jeremy D'Hoinne. Attackers can use various methods to access account passwords, including data breaches and social engineering.

Once a password is compromised, AI steps in. Cybercriminals use automated AI bots to generate login attempts across a range of services, checking whether the same credentials have been reused on other platforms, ideally lucrative ones. Find the right type of site, and the criminal can gather all the related data for a full account takeover. If the hacker doesn't want to carry out the attack themselves, they can always sell the information on the dark web, where willing buyers will grab it.

"Account takeover (ATO) remains a persistent attack vector because weak authentication credentials, such as passwords, are gathered by a variety of means including data breaches, phishing, social engineering, and malware," D'Hoinne said in the report. "Attackers then leverage bots to automate a barrage of login attempts across a variety of services in the hope that the credentials have been reused on multiple platforms."

With AI now in their arsenal, attackers can more easily automate the steps required for an account takeover. As this trend grows, Gartner predicts that the time needed to take over an account will drop by 50% within two years.

Beyond assisting with account takeovers, AI can help cybercriminals carry out deepfake campaigns. Even now, attackers are combining social engineering tactics with deepfake audio and video. By calling an unsuspecting employee and spoofing the voice of a trusted contact or executive, the scammer hopes to trick them into transferring money or divulging confidential information. Only a few high-profile cases have been reported, but they have caused large financial losses at the victimized companies.

Detecting a deepfake voice is still a challenge, especially in person-to-person voice and video calls. With this trend growing, Gartner expects that 40% of social engineering attacks will target executives as well as the general workforce by 2028.

"Organizations will have to stay abreast of the market and adapt procedures and workflows in an attempt to better resist attacks leveraging counterfeit reality techniques," said Manuel Acosta, senior director analyst at Gartner. "Educating employees about the evolving threat landscape by using training specific to social engineering with deepfakes is a key step."

Thwarting AI-powered attacks

How can individuals and organizations thwart these types of AI-powered attacks?
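One starting point is watching the login endpoint itself for the telltale signature of automation. The sketch below (Python; the window size and threshold are illustrative choices of ours, not figures from the Gartner report) shows a minimal sliding-window velocity check that flags a source generating a barrage of failed logins:

```python
# Minimal sketch of a sliding-window velocity check on a login endpoint.
# The window size and threshold are illustrative, not from the Gartner report.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60   # look-back window
MAX_FAILURES = 10     # failed attempts tolerated per window

_failures: dict[str, deque] = defaultdict(deque)  # source -> failure timestamps

def record_failed_login(source: str, now: float | None = None) -> bool:
    """Record one failed login; return True if the source looks automated."""
    now = time.time() if now is None else now
    window = _failures[source]
    window.append(now)
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()  # drop attempts that aged out of the window
    return len(window) > MAX_FAILURES

# A bot hammering the endpoint twice a second trips the check within seconds.
for i in range(12):
    flagged = record_failed_login("203.0.113.7", now=1000.0 + 0.5 * i)
print("flagged as automated:", flagged)  # True
```

Commercial bot-management tools correlate many more signals, such as IP reputation and device fingerprints, but a velocity check like this is the common core.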
"To combat emerging challenges from AI-driven attacks, organizations must leverage AI-powered tools that can provide granular real-time environment visibility and alerting to augment security teams," said Nicole Carignan, senior VP for security & AI strategy at security provider Darktrace. Also: 7 essential password rules to follow, according to security experts "Where appropriate, organizations should get ahead of new threats by integrating machine-driven response, either in autonomous or human-in-the loop modes, to accelerate security team response," Carignan explained. "Through this approach, the adoption of AI technologies -- such as solutions with anomaly-based detection capabilities that can detect and respond to never-before-seen threats --- can be instrumental in keeping organizations secure." Other tools that can help protect you against account compromise are multi-factor authentication and biometric verification, such as facial or fingerprint scans. "Cybercriminals are not only relying on stolen credentials, but also on social manipulation, to breach identity protections," said James Scobey, chief information security officer at Keeper Security. "Deepfakes are a particular concern in this area, as AI models make these attack methods faster, cheaper, and more convincing. As attackers become more sophisticated, the need for stronger, more dynamic identity verification methods - such as multi-factor authentication (MFA) and biometrics - will be vital to defend against these progressively nuanced threats. MFA is essential for preventing account takeovers." Also: Tax scams are getting sneakier - 10 ways to protect yourself before it's too late In its report, Gartner also offered a few tips for dealing with social engineering and deepfake attacks.
[2]
Gartner Predicts AI Agents Will Reduce The Time It Takes To Exploit Account Exposures by 50% by 2027
AI Agents Will Increasingly Exploit Weak Authentication by Automating Credential Theft and Compromising Authentication Communication Channels

By 2027, AI agents will reduce the time it takes to exploit account exposures by 50%, according to Gartner, Inc.

"Account takeover (ATO) remains a persistent attack vector because weak authentication credentials, such as passwords, are gathered by a variety of means including data breaches, phishing, social engineering and malware," said Jeremy D'Hoinne, VP Analyst at Gartner. "Attackers then leverage bots to automate a barrage of login attempts across a variety of services in the hope that the credentials have been reused on multiple platforms."

AI agents will enable automation for more steps in ATO, from social engineering based on deepfake voices to end-to-end automation of user credential abuses. Because of this, vendors will introduce products for web, app, API and voice channels to detect, monitor and classify interactions involving AI agents.

"In the face of this evolving threat, security leaders should expedite the move toward passwordless, phishing-resistant MFA," said Akif Khan, VP Analyst at Gartner. "For customer use cases in which users may have a choice of authentication options, educate and incentivize users to migrate from passwords to multidevice passkeys where appropriate."

Defending Against The Rise and Expansion of Social Engineering Attacks

Along with ATO, technology-enabled social engineering will also pose a significant threat to corporate cybersecurity. Gartner predicts 40% of social engineering attacks will target executives as well as the broader workforce by 2028. Attackers are now combining social engineering tactics with counterfeit reality techniques, such as deepfake audio and video, to deceive employees during calls.

Although only a few high-profile cases have been reported, these incidents have underscored the credibility of the threat and resulted in substantial financial losses for victim organizations. The challenge of detecting deepfakes is still in its early stages, particularly when applied to the diverse attack surfaces of real-time person-to-person voice and video communications across various platforms.

"Organizations will have to stay abreast of the market, and adapt procedures and workflows in an attempt to better resist attacks leveraging counterfeit reality techniques," said Manuel Acosta, Sr. Director Analyst at Gartner. "Educating employees about the evolving threat landscape by using training specific to social engineering with deepfakes is a key step."

Gartner clients can learn more in "Predicts 2025: Navigating Imminent AI Turbulence for Cybersecurity." Learn how to evaluate cybersecurity AI assistants in "How to Evaluate Cybersecurity AI Assistants."

About Gartner for Cybersecurity Leaders

Gartner for Cybersecurity Leaders equips security leaders with the tools to help reframe roles, align security strategy to business objectives and build programs to balance protection with the needs of the organization. Additional information is available at https://www.gartner.com/en/cybersecurity. Follow news and updates from Gartner for Cybersecurity Leaders on X and LinkedIn using #GartnerSEC. Visit the Gartner Newsroom for more information and insights.

About Gartner

Gartner, Inc. (NYSE: IT) delivers actionable, objective insight that drives smarter decisions and stronger performance on an organization's mission-critical priorities. To learn more, visit gartner.com.
Gartner report reveals how cybercriminals are leveraging AI to enhance account takeovers and social engineering attacks, predicting a 50% reduction in exploitation time by 2027 and increased targeting of executives.
Artificial Intelligence (AI) is revolutionizing various industries, and unfortunately, cybercrime is no exception. A recent Gartner report highlights how cybercriminals are leveraging AI to enhance their tactics, particularly in account takeovers (ATO) and social engineering attacks [1]. The integration of AI into cybercriminal operations is expected to significantly reduce the time required to exploit account vulnerabilities, posing a growing threat to individuals and organizations alike.
Account takeovers remain a persistent attack vector due to weak authentication methods, primarily passwords. Jeremy D'Hoinne, VP Analyst at Gartner, explains that attackers gather credentials through various means, including data breaches, phishing, social engineering, and malware [2]. Once credentials are obtained, cybercriminals use automated AI bots to fire login attempts across a range of services, exploiting the common practice of password reuse.
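Because password reuse is what makes credential stuffing pay off, one concrete countermeasure is to screen passwords against known breach corpora at signup and reset time. The sketch below uses the public Have I Been Pwned "Pwned Passwords" range API, which by design only ever receives the first five characters of the password's SHA-1 hash:

```python
# Minimal sketch: check a candidate password against the Have I Been Pwned
# "Pwned Passwords" range API (https://api.pwnedpasswords.com). Only the
# first 5 hex chars of the SHA-1 hash leave the machine (k-anonymity).
import hashlib
import requests

def breach_count(password: str) -> int:
    """Return how many times this password appears in known breaches."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    # Each response line is "<HASH_SUFFIX>:<COUNT>".
    for line in resp.text.splitlines():
        found_suffix, _, count = line.partition(":")
        if found_suffix == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    # A heavily breached password returns a large count: reject it at signup.
    print(breach_count("password123"))
```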
The introduction of AI agents enables automation of more steps in the account takeover process, from social engineering based on deepfake voices to end-to-end automation of credential abuse. Gartner predicts that by 2027, AI agents will reduce the time it takes to exploit account exposures by 50% [2]. This alarming trend necessitates new security products that detect, monitor, and classify interactions involving AI agents across web, app, API, and voice channels.
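What "classifying interactions" can look like in practice: such products score each session on behavioral signals. The toy sketch below flags request timing that is too regular to be human; the signal and threshold are purely illustrative, not any vendor's actual method:

```python
# Toy illustration of one behavioral signal such products use: humans click
# and type with irregular timing, while scripted agents are often metronomic.
# The threshold is illustrative, not from any vendor or from Gartner.
from statistics import mean, stdev

def looks_automated(request_times: list[float], cv_threshold: float = 0.15) -> bool:
    """Flag a session whose inter-request intervals are suspiciously regular."""
    if len(request_times) < 5:
        return False  # not enough evidence either way
    gaps = [b - a for a, b in zip(request_times, request_times[1:])]
    if mean(gaps) == 0:
        return True
    cv = stdev(gaps) / mean(gaps)  # coefficient of variation of the gaps
    return cv < cv_threshold

human = [0.0, 1.4, 4.9, 5.6, 9.8, 11.1]    # jittery, irregular pauses
bot = [0.0, 0.50, 1.01, 1.49, 2.00, 2.51]  # near-constant cadence
print(looks_automated(human), looks_automated(bot))  # False True
```

Real products combine many such signals, including device fingerprints, IP reputation, and navigation patterns, rather than relying on any single heuristic.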
Beyond account takeovers, AI is enabling more sophisticated social engineering attacks through deepfake technology. Cybercriminals are combining traditional social engineering tactics with counterfeit reality techniques, such as deepfake audio and video, to deceive employees during calls [1]. These attacks can result in substantial financial losses for victim organizations, as evidenced by several high-profile cases.
Gartner predicts that by 2028, 40% of social engineering attacks will target both executives and the broader workforce [2]. The challenge of detecting deepfakes in real-time person-to-person voice and video communications across various platforms remains significant, making it crucial for organizations to stay informed about evolving threats and adapt their security measures accordingly.
To combat these emerging threats, cybersecurity experts recommend several strategies:
Implement AI-powered security tools: Nicole Carignan, Senior VP for Security & AI Strategy at Darktrace, suggests using AI-powered tools that provide real-time environment visibility and alerting to augment security teams [1].
Adopt passwordless authentication: Akif Khan, VP Analyst at Gartner, advises expediting the move toward passwordless, phishing-resistant multi-factor authentication (MFA) and educating users about migrating to multidevice passkeys [2] (see the passkey sketch after this list).
Enhance employee training: Organizations should educate employees about the evolving threat landscape, focusing on social engineering with deepfakes [2].
Implement strong authentication methods: James Scobey, Chief Information Security Officer at Keeper Security, emphasizes the importance of multi-factor authentication and biometric verification to defend against sophisticated attacks [1].
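On the passkey recommendation in particular, the server side of a WebAuthn registration ceremony looks roughly like the sketch below. It relies on the open-source py_webauthn package; treat the helper names and signatures as assumptions to verify against the library's current documentation, and note that the browser half (navigator.credentials.create) is omitted:

```python
# Rough sketch of the server side of passkey (WebAuthn) registration using
# the open-source py_webauthn package (pip install webauthn). Helper names
# are from that library as we understand it -- verify against its docs.
from webauthn import (
    generate_registration_options,
    options_to_json,
    verify_registration_response,
)

# Step 1: the server issues a challenge for the browser to pass to the
# authenticator (security key, phone, or platform biometric).
options = generate_registration_options(
    rp_id="example.com",           # illustrative relying-party domain
    rp_name="Example Corp",
    user_name="alice@example.com",
)
send_to_browser = options_to_json(options)  # feed to navigator.credentials.create

# Step 2: the browser returns the authenticator's attestation response;
# the server verifies it and stores the resulting public key.
def finish_registration(response_json: str):
    verification = verify_registration_response(
        credential=response_json,
        expected_challenge=options.challenge,
        expected_origin="https://example.com",
        expected_rp_id="example.com",
    )
    # Persist these; future logins verify signatures against this key.
    return verification.credential_id, verification.credential_public_key
```

Because the private key never leaves the user's device and is bound to the site's origin by the browser, there is no reusable secret for a bot to stuff or a phishing page to capture, which is what makes passkeys phishing-resistant.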
As AI continues to evolve, so too will the tactics employed by cybercriminals. Organizations and individuals must remain vigilant and proactive in their approach to cybersecurity, leveraging AI-powered defenses to stay ahead of these emerging threats.
As AI transforms the cybersecurity landscape, businesses are increasingly adopting AI-powered defenses to combat sophisticated AI-driven threats, highlighting both the risks and opportunities in this technological arms race.
3 Sources
As AI enhances cyber threats, organizations must adopt AI-driven security measures to stay ahead. Experts recommend implementing zero-trust architecture, leveraging AI for defense, and addressing human factors to combat sophisticated AI-powered attacks.
4 Sources
As AI technology advances, cybercriminals are leveraging it to create more sophisticated and personalized social engineering attacks, posing significant challenges for organizations, especially SMEs and supply chains.
3 Sources
As AI revolutionizes cybersecurity, it presents both unprecedented threats and powerful defensive tools. This story explores the evolving landscape of AI-based attacks and the strategies businesses and cybersecurity professionals are adopting to counter them.
2 Sources
Kaspersky, a leading cybersecurity firm, highlights the growing concerns surrounding the use of AI in cyber threats. The company emphasizes the need for proactive measures to combat AI-powered attacks in an increasingly complex digital environment.
2 Sources