Curated by THEOUTPOST
On Wed, 12 Mar, 5:38 PM UTC
4 Sources
[1]
4 expert security tips for navigating AI-powered cyber threats
Cybercriminals are weaponizing artificial intelligence (AI) across every attack phase. Large language models (LLMs) craft hyper-personalized phishing emails by scraping targets' social media profiles and professional networks. Generative adversarial networks (GAN) produce deepfake audio and video to bypass multi-factor authentication. Automated tools like WormGPT enable script kiddies to launch polymorphic malware that evolves to evade signature-based detection. These cyber attacks aren't speculative, either. Organizations that fail to develop their security strategies risk being overrun by an onslaught of hyper-intelligent cyber threats -- in 2025 and beyond. Also: Want to win in the age of AI? You can either build it or build your business with it To better understand how AI impacts enterprise security, I spoke with Bradon Rogers, a cloud and enterprise cybersecurity veteran, about this new era of digital security, early threat detection, and how you can prepare your team for AI-enabled attacks. But first, some background on what to expect. AI provides malicious actors with sophisticated tools that make cyber attacks more precise, persuasive, and challenging to detect. For example, modern generative AI systems can analyze vast datasets of personal information, corporate communications, and social media activity to craft hyper-targeted phishing campaigns that convincingly mimic trusted contacts and legitimate organizations. This capability, combined with automated malware that adapts to defensive measures in real-time, has dramatically increased both the scale and success rate of attacks. Deepfake technology enables attackers to generate compelling video and audio content, facilitating everything from executive impersonation fraud to large-scale disinformation campaigns. Recent incidents include a $25 million theft from a Hong Kong-based company via deepfake video conferencing and numerous cases of AI-generated voice clips being used to deceive employees and family members into transferring funds to criminals. Also: Most AI voice cloning tools aren't safe from scammers, Consumer Reports finds AI-enabled automated cyber attacks led to the innovation of "set-and-forget" attack systems that continuously probe for vulnerabilities, adapt to defensive measures, and exploit weaknesses without human intervention. One example is the 2024 breach of major cloud service provider AWS. AI-powered malware systematically mapped network architecture, identified potential vulnerabilities, and executed a complex attack chain that compromised thousands of customer accounts. These incidents highlight how AI isn't just augmenting existing cyber threats but creating entirely new categories of security risks. Here are Rogers' suggestions for how to tackle the challenge. The traditional security perimeter is no longer sufficient in the face of AI-enhanced threats. A zero-trust architecture operates on a "never trust, always verify" principle, ensuring that every user, device, and application is authenticated and authorized before gaining access to resources. This approach minimizes the risk of unauthorized access, even if an attacker manages to breach the network. "Enterprises must verify every user, device, and application -- including AI -- before they access critical data or functions," underscores Rogers, noting that this approach is an organization's "best course of action." By continuously verifying identities and enforcing strict access controls, businesses can reduce the attack surface and limit potential damage from compromised accounts. Also: This new AI benchmark measures how much models lie While AI poses challenges, it also offers powerful tools for defense. AI-driven security solutions can analyze vast amounts of data in real time, identifying anomalies and potential threats that traditional methods might miss. These systems can adapt to emerging attack patterns, providing a dynamic defense against AI-powered cyberattacks. Rogers adds that AI -- like cyber defense systems -- should never be treated as a built-in feature. "Now is the time for CISOs and security leaders to build systems with AI from the ground up," he says. By integrating AI into their security infrastructure, organizations can enhance their ability to detect and respond to incidents swiftly, reducing the window of opportunity for attackers. Organizations can reduce the risk of internal vulnerabilities by fostering a culture of security awareness and providing clear guidelines on using AI tools. Humans are complex, so simple solutions are often the best. "It's not just about mitigating external attacks. It's also providing guardrails for employees who are using AI for their own 'cheat code for productivity,'" Rogers says. Also: DuckDuckGo's AI beats Perplexity in one big way - and it's free to use Human error remains a significant vulnerability in cybersecurity. As AI-generated phishing and social engineering attacks become more convincing, educating employees about these evolving threats is even more crucial. Regular training sessions can help staff recognize suspicious activities, such as unexpected emails or requests that deviate from routine procedures. The accessibility of AI technologies has led to widespread adoption across various business functions. However, unsanctioned or unmonitored use of AI -- often called "shadow AI" -- can introduce significant security risks. Employees may inadvertently use AI applications that lack proper security measures, leading to potential data leaks or compliance issues. "We can't have corporate data flowing freely all over the place into unsanctioned AI environments, so a balance must be struck," Rogers explains. Implementing policies that govern AI tools, conducting regular audits, and ensuring that all AI applications comply with the organization's security standards are essential to mitigating these risks. The complexity of AI-driven threats necessitates collaboration with experts specializing in AI and cybersecurity. Partnering with external firms can provide organizations access to the latest threat intelligence, advanced defensive technologies, and specialized skills that may not be available in-house. Also: How Cisco, LangChain, and Galileo aim to contain 'a Cambrian explosion of AI agents' AI-powered attacks require sophisticated countermeasures that traditional security tools often lack. AI-enhanced threat detection platforms, secure browsers, and zero-trust access controls analyze user behavior, detect anomalies, and prevent malicious actors from gaining unauthorized access. Rogers highlights that the innovative solutions for the enterprise "are a missing link in the zero-trust security framework. [These tools] provide deep, granular security controls that seamlessly protect any app or resource across public and private networks." These tools leverage machine learning to continuously monitor network activity, flag suspicious patterns, and automate incident response, reducing the risk of AI-generated attacks infiltrating corporate systems.
[2]
Why AI-powered security tools are your secret weapon against tomorrow's attacks
It's an age-old adage of cyber defense that an attacker has to find just one weakness or exploit, but the defender has to defend against everything. The challenge of AI, when it comes to cybersecurity, is that it is an arms race in which weapons-grade AI capabilities are available to both attackers and defenders. Cisco is one of the world's largest networking companies. As such, it is on the front lines of defending against AI-powered cyberattacks. Also: 4 expert security tips for navigating AI-powered cyber threats In this exclusive interview, ZDNET sits down with Cisco's AI products VP, Anand Raghavan, to discuss how AI-powered tools are revolutionizing cybersecurity and expanding organizations' attack surfaces. ZDNET: Can you briefly introduce yourself and describe your role at Cisco? Anand Raghavan: I'm Anand Raghavan, VP Products, AI for the AI Software and Platforms Group at Cisco. We focus on working with product teams across Cisco to bring together transformative, safe, and secure Gen AI-powered products to our customers. Two products that we launched in the recent past are the Cisco AI Assistant which makes it easy for our customers to interact with our products using natural language, and Cisco AI Defense which enables safe and secure use of AI for employees and for cloud applications that organizations build for their customers. ZDNET: How is AI transforming the nature of threats enterprises and governments face at the network level? AR: AI has completely changed the game for network security, enabling hackers to launch more sophisticated and less time-intensive attacks. They're using automation to launch more personalized and effective phishing campaigns, which means employees may be more likely to fall for phishing attempts. We're seeing malware that uses AI to adapt to avoid detection from traditional network security tools. As AI tools become more common, they expand the attack surface that security teams need to manage and they exacerbate the existing problem of shadow IT. Just as companies have access to AI to build new and interesting applications, bad actors have access to the same sets of technologies to create new attacks and threats. It has become more important than ever to use the latest in advancements in AI to be able to identify these new kinds of threats and to automate the remediation of these threats. Also: The head of US AI safety has stepped down. What now? Whether it is malicious connections that can be stopped in real-time in the encrypted domain within our firewalls using our Encrypted Visibility Engine technology, or our language-based detectors of fraudulent emails in our Email Threat Defense product, it has become critical to understand the new attack surface of threats and how to protect against them. With the advent of customer-facing AI applications, models and model-related vulnerabilities have become critical new attack surfaces. AI models can be the target of threats. Prompt injection or denial of service attacks may inadvertently leak sensitive data. The security industry has responded quickly to incorporate AI into solutions to spot unusual patterns and detect suspicious network activity. but it's a race to stay one step ahead. ZDNET: How do AI-driven tools help enterprises stay ahead of increasingly sophisticated cyber adversaries? AR: In an evolving threat landscape, AI-powered security tools deliver continuous and self-optimizing monitoring at a scale that manual monitoring can't match. Using AI, a security team can analyze data from various sources across a company's entire ecosystem and detect unusual patterns or suspicious traffic that could indicate a data breach. Because AI analyzes this data more quickly than humans, organizations can respond to incidents in near real-time to mitigate potential threats. Also: What is DeepSeek AI? Is it safe? Here's everything you need to know When it comes to threat monitoring and detection, AI offers security professionals a "better together" scenario where the human professionals get visibility and response times with the AI that they wouldn't be able to achieve solo. In a world where experienced top-level Tier 3 analysts in the SOC [security operations center] are harder to find, AI can be an integral part of an organization's strategy to aid and assist Tier 1 and Tier 2 analysts in their jobs and drastically reduce their mean time to remediation for any new discovered incidents and threats. Workflow automation for XDR [extended detection and response] using AI will help enterprises stay ahead of cyber adversaries. ZDNET: Explain AI Defense, and what is the main problem it aims to solve? AR: When you think about how quickly people have adopted AI applications, it's off the charts. Within organizations, however, AI development and adoption isn't moving as quickly as it could be because people still aren't sure it's safe or they aren't confident they can keep it secure. According to Cisco's 2024 AI Readiness Index, only 29% of organizations feel fully equipped to detect and prevent unauthorized tampering with AI. Companies can't afford to risk security by moving too quickly, but they also can't risk being lapped by their competition because they didn't embrace AI. Also: Tax scams are getting sneakier - 10 ways to protect yourself before it's too late AI Defense enables and safeguards AI transformation within enterprises, so they don't have to make this tradeoff. In the future, there will be AI companies and companies that are irrelevant. Thinking about this challenge at a high level, AI poses two overarching risks to an enterprise. The first is the risk of sensitive data exposure from employees misusing third-party AI tools. Any intellectual property or confidential information shared with an unsanctioned AI application is susceptible to leakage and exploitation. The second risk is related to how businesses develop and deploy their own AI applications. AI models need to be protected from threats such as prompt injections or training data poisoning, so they continue to operate the way that they are intended and are safe for customers to use. Also: AI is changing cybersecurity and businesses must wake up to the threat Cisco AI Defense addresses both areas of AI risk. Our AI Access solution gives security teams a comprehensive view of third-party AI applications in use and enables them to set policies that limit sensitive data sharing or restrict access to unsanctioned tools. For businesses developing their own AI applications, AI Defense uses algorithmic red team technology to automate vulnerability assessments for models. After identifying these risks in seconds, AI Defense provides runtime guardrails to keep AI applications protected against threats like prompt injections, data extraction, and denial of service in real-time. ZDNET: How does AI Defense differentiate itself from existing security frameworks? AR: The safety and security of AI is a massive new challenge that enterprises are only just beginning to contend with. After all, AI is fundamentally different from traditional applications and existing security frameworks don't necessarily apply in the same ways. AI Defense is purpose-built to protect enterprises from the risks of AI application usage and development. Our solution is built on Cisco's own custom AI models with two main principles: continuous AI validation and protection at scale. Also: That weird CAPTCHA could be a malware trap - here's how to protect yourself When it comes to securing traditional applications, companies use a red team of human security professionals to try to jailbreak the app and find vulnerabilities. This approach doesn't provide anywhere near the scale needed to validate non-deterministic AI models. You'd need teams of thousands working for weeks. This is why AI Defense uses an algorithmic red teaming solution that continuously monitors for vulnerabilities and recommends guardrails when it finds them. Cisco's platform approach to security means that these guardrails are distributed across the network and the security team gets total visibility across their AI footprint. ZDNET: What is Cisco's vision for integrating AI Defense with broader enterprise security strategies? AR: Cisco's 2024 AI Readiness Index showed that while organizations face mounting pressure to adopt AI, most organizations are still not ready to capture AI's potential and many lack awareness around AI security risks. With solutions like AI Defense, Cisco is enabling organizations to unlock the benefits of AI and do so securely. Cisco AI Defense is designed to address the security challenges of a multi-cloud, multi-model world in which organizations operate. Also: How Cisco, LangChain, and Galileo aim to contain 'a Cambrian explosion of AI agents' It gives security teams visibility and control over AI applications and is frictionless for developers, saving them time and resources so they can focus on innovating. When an organization is looking to adopt AI, both for employees and to build customer-facing applications, their adoption lifecycle has the following steps: These are the core areas that AI Defense supports as part of its capabilities. Enforcement can happen in a Secure Access or SASE [secure access service edge] product for employee protection, and enforcement for cloud applications can happen in a Cloud Protection Suite application like Cisco Multicloud Defense. ZDNET: What strategies should enterprises adopt to mitigate the risks of adversarial attacks on AI systems? AR: AI applications introduce a new class of security risks to an organization's tech stack. Unlike traditional apps, AI apps include models, which are unpredictable and non-deterministic. When models don't behave as they are supposed to, they can result in hallucinations and other unintended consequences. Models can also fall victim to attacks like training data poisoning, prompt injection, and jailbreaking. Model builders and developers will both have security layers in place for AI models, but in a multi-cloud, multi-model system, there will be inconsistent safety and security standards. To protect against AI tampering and the risk of data leakage, organizations need a common substrate of security across all clouds, apps, and models. Also: This powerful firewall delivers enterprise-level security at a home office price This becomes even more important when you have fragmented accountability across stakeholders -- model builders, app builders, governance, risk, and compliance teams. Having a common substrate in terms of an AI security product that can monitor and enforce the right set of guardrails that protect across all categories of AI safety and security as outlined by standards such as MITRE ATLAS and OWASP LLM10 and NIST RMF becomes vital. ZDNET: Could you share a real-world scenario or case study where AI Defense could prevent a critical security breach? AR: As I mentioned, AI Defense covers the two main areas of enterprise AI risk: the usage of third-party AI tools and the development of new AI applications. Let's look at incident scenarios for each of these use cases. In the first scenario, an employee shares information about some of your customers with an unsanctioned AI assistant for help preparing a presentation. This confidential data can become codified in the AI's retraining data, meaning it can be shared with other public users. AI Defense can limit this data sharing or restrict access to the unsanctioned tool entirely, mitigating the risk of what would otherwise be a devastating privacy violation. Also: The best malware removal software: Expert tested and reviewed In the second scenario, an AI developer uses an open-source foundation model to create an AI customer service assistant. They fine-tune it for relevance but inadvertently weaken its built-in guardrails. Within days, it's hallucinating incorrect responses and becoming more susceptible to adversarial attack. With continuous monitoring and vulnerability testing, AI Defense would identify the flaw in the model and apply your preferred guardrails automatically. ZDNET: What emerging trends in AI security do you foresee shaping the future of cybersecurity? AR: One critical aspect of AI in security is that we're seeing exploit times decrease. Security professionals have a shorter window than ever between when a vulnerability is discovered and when it is exploited by attackers. As AI makes cybercriminals faster and their attacks more efficient, it's increasingly urgent that organizations detect and patch vulnerabilities quickly. AI can significantly speed up the detection of vulnerabilities so security teams can respond in real time. Also: Most people worry about deepfakes - and overestimate their ability to spot them Deepfakes are going to be a massive security concern over the next five years. In many ways, the security industry is just getting ready for deepfakes and how to defend against them, but this will be a critical area of vulnerability and risk for organizations. The same way denial-of-service attacks were a major concern 10 years ago and ransomware has been a critical threat in more recent years, deepfakes are going to keep a lot of security professionals up at night. ZDNET: How can governments and enterprises collaborate to build robust AI security standards? AR: By working together, governments and the private sector can tap into a deep pool of knowledge and wide spectrum of perspectives to develop best practices in a quickly evolving risk landscape of AI and security. Last year, Cisco worked with the Cybersecurity and Infrastructure Security Agency's (CISA) Joint Cyber Defense Collaborative (JCDC), which brought together industry leaders from some of the biggest players in tech, such as OpenAI, Amazon, Microsoft, and Nvidia, and government agencies to collaborate with the goal of enhancing organizations' collective ability to respond to AI-related security incidents. Also: When you should use a VPN - and when you shouldn't We participated in a tabletop exercise and collaborated on the recently released "AI Security Incident Collaboration Playbook," which is a guide for collaboration between government and private industry. It offers practical, actionable advice for responding to AI-related security incidents and guidance on voluntarily sharing information related to vulnerabilities associated with AI systems. Together, government and the private sector can raise awareness of security risks facing this critical technology. ZDNET: How do you see AI bridging the gap between cyberattack prevention and incident response? AR: We're already seeing AI-enabled security solutions deliver continuous and scalable monitoring that helps human security teams detect suspicious network activity and vulnerabilities. We're in the stage where AI is an invaluable tool that gives security professionals better visibility and recommendations on how to respond to security incidents. Also: Why OpenAI's new AI agent tools could change how you code Eventually, we'll reach a point where AI can automatically deploy and implement security patches with oversight from a human security professional. The benefits, in a nutshell, are continuity (always monitoring), scalability (as your attack surface grows, AI helps you manage it), accuracy (AI can detect even more subtle indicators that a human might miss), and speed (faster than manual review). AI is transforming cybersecurity, but are enterprises truly prepared for the risks it brings? Have you encountered AI-driven cyber threats in your organization? Do you think AI-powered security solutions can stay ahead of increasingly sophisticated attacks? How do you see the balance between AI as a security tool and a potential vulnerability? Are companies doing enough to secure their AI models from exploitation? Let us know in the comments below.
[3]
Navigating AI-powered cyber threats in 2025: 4 expert security tips for businesses
Cybercriminals are weaponizing artificial intelligence (AI) across every attack phase. Large language models (LLMs) craft hyper-personalized phishing emails by scraping targets' social media profiles and professional networks. Generative adversarial networks (GAN) produce deepfake audio and video to bypass multi-factor authentication. Automated tools like WormGPT enable script kiddies to launch polymorphic malware that evolves to evade signature-based detection. These cyber attacks aren't speculative, either. Organizations that fail to develop their security strategies risk being overrun by an onslaught of hyper-intelligent cyber threats -- in 2025 and beyond. Also: Want to win in the age of AI? You can either build it or build your business with it To better understand how AI impacts enterprise security, I spoke with Bradon Rogers, an SVP at Intel Security and enterprise cybersecurity veteran, about this new era of digital security, early threat detection, and how you can prepare your team for AI-enabled attacks. But first, some background on what to expect. AI provides malicious actors with sophisticated tools that make cyber attacks more precise, persuasive, and challenging to detect. For example, modern generative AI systems can analyze vast datasets of personal information, corporate communications, and social media activity to craft hyper-targeted phishing campaigns that convincingly mimic trusted contacts and legitimate organizations. This capability, combined with automated malware that adapts to defensive measures in real-time, has dramatically increased both the scale and success rate of attacks. Deepfake technology enables attackers to generate compelling video and audio content, facilitating everything from executive impersonation fraud to large-scale disinformation campaigns. Recent incidents include a $25 million theft from a Hong Kong-based company via deepfake video conferencing and numerous cases of AI-generated voice clips being used to deceive employees and family members into transferring funds to criminals. Also: Most AI voice cloning tools aren't safe from scammers, Consumer Reports finds AI-enabled automated cyber attacks led to the innovation of "set-and-forget" attack systems that continuously probe for vulnerabilities, adapt to defensive measures, and exploit weaknesses without human intervention. One example is the 2024 breach of major cloud service provider AWS. AI-powered malware systematically mapped network architecture, identified potential vulnerabilities, and executed a complex attack chain that compromised thousands of customer accounts. These incidents highlight how AI isn't just augmenting existing cyber threats but creating entirely new categories of security risks. Here are Rogers' suggestions for how to tackle the challenge. The traditional security perimeter is no longer sufficient in the face of AI-enhanced threats. A zero-trust architecture operates on a "never trust, always verify" principle, ensuring that every user, device, and application is authenticated and authorized before gaining access to resources. This approach minimizes the risk of unauthorized access, even if an attacker manages to breach the network. "Enterprises must verify every user, device, and application -- including AI -- before they access critical data or functions," underscores Rogers, noting that this approach is an organization's "best course of action." By continuously verifying identities and enforcing strict access controls, businesses can reduce the attack surface and limit potential damage from compromised accounts. Also: This new AI benchmark measures how much models lie While AI poses challenges, it also offers powerful tools for defense. AI-driven security solutions can analyze vast amounts of data in real time, identifying anomalies and potential threats that traditional methods might miss. These systems can adapt to emerging attack patterns, providing a dynamic defense against AI-powered cyberattacks. Rogers adds that AI -- like cyber defense systems -- should never be treated as a built-in feature. "Now is the time for CISOs and security leaders to build systems with AI from the ground up," he says. By integrating AI into their security infrastructure, organizations can enhance their ability to detect and respond to incidents swiftly, reducing the window of opportunity for attackers. Organizations can reduce the risk of internal vulnerabilities by fostering a culture of security awareness and providing clear guidelines on using AI tools. Humans are complex, so simple solutions are often the best. "It's not just about mitigating external attacks. It's also providing guardrails for employees who are using AI for their own 'cheat code for productivity,'" Rogers says. Also: DuckDuckGo's AI beats Perplexity in one big way - and it's free to use Human error remains a significant vulnerability in cybersecurity. As AI-generated phishing and social engineering attacks become more convincing, educating employees about these evolving threats is even more crucial. Regular training sessions can help staff recognize suspicious activities, such as unexpected emails or requests that deviate from routine procedures. The accessibility of AI technologies has led to widespread adoption across various business functions. However, unsanctioned or unmonitored use of AI -- often called "shadow AI" -- can introduce significant security risks. Employees may inadvertently use AI applications that lack proper security measures, leading to potential data leaks or compliance issues. "We can't have corporate data flowing freely all over the place into unsanctioned AI environments, so a balance must be struck," Rogers explains. Implementing policies that govern AI tools, conducting regular audits, and ensuring that all AI applications comply with the organization's security standards are essential to mitigating these risks. The complexity of AI-driven threats necessitates collaboration with experts specializing in AI and cybersecurity. Partnering with external firms can provide organizations access to the latest threat intelligence, advanced defensive technologies, and specialized skills that may not be available in-house. Also: How Cisco, LangChain, and Galileo aim to contain 'a Cambrian explosion of AI agents' AI-powered attacks require sophisticated countermeasures that traditional security tools often lack. AI-enhanced threat detection platforms, secure browsers, and zero-trust access controls analyze user behavior, detect anomalies, and prevent malicious actors from gaining unauthorized access. Rogers highlights that the innovative solutions for the enterprise "are a missing link in the zero-trust security framework. [These tools] provide deep, granular security controls that seamlessly protect any app or resource across public and private networks." These tools leverage machine learning to continuously monitor network activity, flag suspicious patterns, and automate incident response, reducing the risk of AI-generated attacks infiltrating corporate systems.
[4]
AI vs. AI: 6 ways enterprises are automating cybersecurity to counter AI-powered attacks
Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Why is AI becoming essential for cybersecurity? Because every day, in fact every second, malicious actors are using artificial intelligence to widen the scope and speed of their attack methods. For one thing, as Adam Meyers, senior vice president at CrowdStrike, told VentureBeat in a recent interview, "The adversary is getting 10 to 14 minutes faster every year. As their breakout times shrink, defenders have to react even faster -- detecting, investigating and stopping threats before they spread. This is the game of speed." Meanwhile, Gartner wrote in its recent study, Emerging Tech Impact Radar: Preemptive Cybersecurity, that "[m]alicious actors are exploiting generative AI to launch attacks at machine speed. Organizations can no longer afford to wait for a breach to be detected before taking action. It has become crucial to anticipate potential attacks and prioritize preemptive mitigation measures with predictive analysis." And for its part, Darktrace's latest threat report reflects the new, ruthless mindset of cyberattackers willing to do whatever it takes to gain the speed and stealth they need to breach an enterprise, exfiltrating data, funds, and identities even before security teams know they've been hit. Their weaponization of AI extends beyond deepfakes into phishing email blasts that resemble legitimate marketing campaigns in scale and scope. One of the most noteworthy findings from Darktrace's research is the growing threat of weaponized AI and malware-as-a-service (MaaS). According to Darktrace's recent research, MaaS now constitutes 57% of all cyberattacks, signaling a significant acceleration toward automated cybercrime. AI is meeting cybersecurity's need for speed Breakout times are plummeting. That's a sure sign that attackers are moving faster and fine-tuning new techniques that perimeter-based legacy systems and platforms can't catch. Microsoft's Vasu Jakkal quantified this acceleration vividly in a recent VentureBeat interview: "Three years ago, we were seeing 567 password-related attacks per second. Today, that number has skyrocketed to 7,000 per second." Few understand this challenge better than Katherine Mowen, SVP of information security at Rate Companies (formerly Guaranteed Rate), one of the largest retail mortgage lenders in the U.S. With billions of dollars in transactions flowing through its systems daily, Rate Companies is a prime target for AI-driven cyberattacks, from credential theft to sophisticated identity-based fraud. As Mowen explained in a recent VentureBeat interview, "Because of the nature of our business, we face some of the most advanced and persistent cyber threats out there. We saw others in the mortgage industry getting breached, so we needed to ensure it didn't happen to us. I think what we're doing right now is fighting AI with AI." Rate Companies' strategy to attain greater cyber resilience is anchored in AI threat modeling, zero-trust security, and automated response, which offers valuable lessons for security leaders across industries. "Cyber attackers now leverage AI-driven malware that can morph in seconds. If your defenses aren't just as adaptive, you're already behind," CrowdStrike CEO George Kurtz told VentureBeat. The Rate Companies' Mowen, for example, is battling adversarial AI with a series of working defensive AI strategies. Fighting AI with AI: what's working VentureBeat sat down with a group of CISOs, who requested anonymity, to better understand their playbooks for fighting AI with AI. Here are six lessons learned from that session: Improving threat detection with self-learning AI is paying off. Adversarial AI is at the center of an increasingly large number of breaches today. One quick takeaway from all this activity is that signature-based detection is struggling, at best, to keep up with attackers' latest tradecraft. Cyberattackers aren't stopping at exploiting identities and their many vulnerabilities. They're progressing to using living-off-the-land (LOTL) techniques and weaponizing AI to bypass static defenses. Security teams are forced to shift from reactive to proactive defense. DarkTrace's report explains why. The company detected suspicious activity on Palo Alto firewall devices 17 days before a zero-day exploit was disclosed. That's just one of many examples of the rising number of AI-assisted attacks on critical infrastructure, which the report provides data on. Nathaniel Jones, VP of threat research at Darktrace, observed that "detecting threats after an intrusion is no longer enough. Self-learning AI pinpoints subtle signals humans overlook, enabling proactive defense." Consider automating phishing defenses with AI-driven threat detection. Phishing attacks are soaring, with over 30 million malicious emails detected by Darktrace in the last year alone. The majority, or 70%, are bypassing traditional email security by leveraging AI-generated lures that are indistinguishable from legitimate communications. Phishing and business email compromise (BEC) are two areas in which cybersecurity teams are relying on AI to help identify and stop breaches. "Leveraging AI is the best defense against AI-powered attacks," said Deepen Desai, chief security officer at Zscaler. The Rate Companies' Mowen emphasized the need for proactive identity security: "With attackers constantly refining their tactics, we needed a solution that could adapt in real time and give us deeper visibility into potential threats." AI-driven incident response: Are you fast enough to contain the threat? Every second counts in any intrusion or breach. With breakout times plummeting, there's no time to waste. Perimeter-based systems often have outdated code that hasn't been patched in years. That all fuels false alarms. Meanwhile, attackers who are perfecting weaponized AI are getting beyond firewalls and into critical systems in a matter of seconds. Mowen suggests that CISOs follow the Rate Companies' 1-10-60 SOC model, which looks to detect an intrusion in one minute, triage it in 10, and contain it within 60. She advises making this the benchmark for security operations. As Mowen warns, "Your attack surface isn't just infrastructure -- it's also time. How long do you have to respond?" Organizations that fail to accelerate containment risk prolonged breaches and higher damages. She recommends that CISOs measure AI's impact on incident response by tracking mean time to detect (MTTD), mean time to respond (MTTR), and false-positive reduction. The faster threats are contained, the less damage they can inflict. AI isn't just an enhancement -- it's becoming a necessity. Find new ways continuously to harden attack surfaces with AI. Every organization is grappling with the challenges of a constantly shifting series of attack surfaces that can range from a fleet of mobile devices to large-scale cloud migrations or a myriad of IoT sensors and endpoints. AI-driven exposure management proactively identifies and mitigates vulnerabilities in real time. At Rate Companies, Mowen stresses the necessity of scalability and visibility. "We manage a workforce that can grow or shrink quickly," Mowen said. The need to flex and adapt its business operations quickly is one of several factors that drove Rate's strategy to use AI for real-time visibility and automated detection of misconfigurations across its diverse cloud environments. Detect and reduce the number of insider threats using behavioral analytics and AI. Insider threats, exacerbated by the rise of shadow AI, have become a pressing challenge. AI-driven user and entity behavior analytics (UEBA) addresses this by continuously monitoring user behavior against established baselines and rapidly detecting deviations. Rate Companies faced significant identity-based threats, prompting Mowen's team to integrate real-time monitoring and anomaly detection. She noted: "Even the best endpoint protections don't matter if an attacker simply steals user credentials. Today, we operate with a 'never trust, always verify' approach, continuously monitoring every transaction." Vineet Arora, CTO at WinWire, observed that traditional IT management tools and processes often lack comprehensive visibility and control over AI applications, allowing shadow AI to thrive. He emphasized the importance of balancing innovation with security, stating, "Providing safe AI options ensures people aren't tempted to sneak around. You can't kill AI adoption, but you can channel it securely." Implementing UEBA with AI-driven anomaly detection strengthens security, reducing both risk and false positives. Human-in-the-loop AI: essential for long-term cybersecurity success. One of the main goals of implementing AI across any cybersecurity app, platform or product is for it to continually learn and augment the expertise of humans, not replace it. There needs to be a reciprocal relationship of knowledge for AI and human teams to both excel. "Many times, the AI doesn't replace the humans. It augments the humans," says Elia Zaitsev, CTO at CrowdStrike. "We can only build the AI that we're building so quickly and so efficiently and so effectively because we've had literally a decade-plus of humans creating human output that we can now feed into the AI systems." This human-AI collaboration is particularly critical in security operations centers (SOCs), where AI must operate with bounded autonomy, assisting analysts without taking full control. AI vs. AI: The future of cybersecurity is now AI-powered threats are automating breaches, morphing malware in real time and generating phishing campaigns nearly indistinguishable from legitimate communications. Enterprises must move just as fast, embedding AI-driven detection, response and resilience into every layer of security. Breakout times are shrinking, and legacy defenses can't keep up. The key is not just AI but AI working alongside human expertise. As security leaders like Rate Companies' Katherine Mowen and CrowdStrike's Elia Zaitsev emphasize, AI should amplify defenders, not replace them, enabling faster, smarter security decisions. Do you think AI will outpace human defenders in cybersecurity? Let us know!
Share
Share
Copy Link
As AI enhances cyber threats, organizations must adopt AI-driven security measures to stay ahead. Experts recommend implementing zero-trust architecture, leveraging AI for defense, and addressing human factors to combat sophisticated AI-powered attacks.
As we approach 2025 and beyond, the cybersecurity landscape is undergoing a dramatic transformation due to the integration of artificial intelligence (AI) in both offensive and defensive strategies. Cybercriminals are increasingly weaponizing AI across various attack phases, creating a new era of sophisticated threats that organizations must be prepared to face 13.
Large language models (LLMs) are being utilized to craft hyper-personalized phishing emails by scraping targets' social media profiles and professional networks. Generative adversarial networks (GANs) are producing convincing deepfake audio and video content to bypass multi-factor authentication systems. Automated tools like WormGPT enable even less skilled attackers to launch polymorphic malware that evolves to evade signature-based detection 13.
AI-powered attacks have already resulted in significant financial losses and security breaches. In one notable incident, a Hong Kong-based company fell victim to a $25 million theft through the use of deepfake video conferencing technology 13. The 2024 breach of a major cloud service provider, AWS, demonstrated the potential of AI-powered malware to systematically map network architecture, identify vulnerabilities, and execute complex attack chains, compromising thousands of customer accounts 13.
To combat these evolving threats, cybersecurity experts recommend several key strategies:
Implement Zero-Trust Architecture: Traditional security perimeters are no longer sufficient. A zero-trust approach, which operates on a "never trust, always verify" principle, ensures that every user, device, and application is authenticated and authorized before accessing resources 13.
Leverage AI for Defense: While AI poses challenges, it also offers powerful defensive tools. AI-driven security solutions can analyze vast amounts of data in real-time, identifying anomalies and potential threats that traditional methods might miss 123.
Build AI-Native Security Systems: Cybersecurity leaders should focus on building systems with AI integrated from the ground up, rather than treating it as an add-on feature 13.
Human error remains a significant vulnerability in cybersecurity. As AI-generated phishing and social engineering attacks become more convincing, organizations must prioritize employee education and awareness 13.
Regular Training: Conduct frequent sessions to help staff recognize suspicious activities and evolving AI-powered threats 13.
Establish Clear Guidelines: Provide employees with clear guidelines on the use of AI tools to reduce the risk of internal vulnerabilities 13.
Manage "Shadow AI": Address the risks associated with unsanctioned or unmonitored use of AI applications, which can lead to potential data leaks or compliance issues 13.
The cybersecurity landscape is evolving into an AI arms race, where both attackers and defenders are leveraging advanced technologies to gain an edge. As Anand Raghavan, VP Products, AI for the AI Software and Platforms Group at Cisco, notes, "It has become more important than ever to use the latest in advancements in AI to be able to identify these new kinds of threats and to automate the remediation of these threats" 2.
Organizations are increasingly turning to AI-powered security tools to stay ahead of sophisticated cyber adversaries. These tools offer continuous and self-optimizing monitoring at a scale that manual monitoring cannot match, enabling security teams to analyze data from various sources across a company's entire ecosystem and detect unusual patterns or suspicious traffic in near real-time 2.
As the cybersecurity landscape continues to evolve, organizations must remain vigilant and adaptive in their approach to security. By embracing AI-driven defense strategies and addressing both technological and human factors, businesses can better protect themselves against the growing threat of AI-powered cyberattacks in 2025 and beyond.
Reference
As AI revolutionizes cybersecurity, it presents both unprecedented threats and powerful defensive tools. This story explores the evolving landscape of AI-based attacks and the strategies businesses and cybersecurity professionals are adopting to counter them.
2 Sources
2 Sources
As AI-driven cyber threats evolve, organizations are turning to advanced technologies and zero-trust frameworks to protect identities and secure endpoints. This shift marks a new era in cybersecurity, where AI is both a threat and a critical defense mechanism.
2 Sources
2 Sources
As AI transforms the cybersecurity landscape, businesses are increasingly adopting AI-powered defenses to combat sophisticated AI-driven threats, highlighting both the risks and opportunities in this technological arms race.
3 Sources
3 Sources
Cisco launches AI Defense to address the widening gap between adversarial AI and defensive AI, offering real-time monitoring, model validation, and policy enforcement at scale.
2 Sources
2 Sources
Experts discuss the potential of AI in bolstering cybersecurity defenses. While AI shows promise in detecting threats, concerns about its dual-use nature and the need for human oversight persist.
2 Sources
2 Sources
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved