Curated by THEOUTPOST
On Fri, 2 May, 12:05 AM UTC
7 Sources
[1]
RSAC 2025: Why the AI agent era means more demand for CISOS
Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More While over 20 vendors announced agentic AI-based security agents, apps and platforms at RSAC 2025, the most insightful news from the conference is a rare, encouraging trend for security leaders. For the first time in three years, overall cybersecurity effectiveness has improved. Scale Venture Partners (SVP) recently released the 2025 Cybersecurity Perspectives Report, which shared that the average effectiveness of cybersecurity protections improved for the first time in three years, increasing to 61% efficacy this year from 48% in 2023. According to the report, "70% of security leaders were most protected against general phishing attacks, with only 28% of firms reporting compromise." SVP also found that 77% of CISOs believe protecting AI/ML models and data pipelines is a priority to improve their security posture by 2025, up from 55% last year. Notably, given the influx of new agentic AI solutions announced at RSAC, 75% of firms expressed interest in leveraging AI to automate SOC investigations using AI agents to triage large volumes of security alerts to prevent security incidents. SVP's rise in efficacy numbers isn't accidental; they result from CISOs and their teams adopting automation at scale while successfully consolidating their platforms and reducing gaps attackers had walked through in the past. "If you don't have complete visibility, the attackers are going to go through the cracks between products," Etay Maor, senior director of security strategy at Cato Networks, told VentureBeat during RSAC 2025. "We designed our platform to eliminate those blind spots -- bringing security and networking together so nothing escapes our eyes." Agentic AI is moving fast beyond minimum viable product to platform DNA Maor's perspective explains why a new definition of what a minimum viable product is needed for agentic AI in cybersecurity. RSAC 2025 revealed how mature agentic AI is becoming. There's a group of vendors using agentic AI as a code-based adhesive to unify code bases and apps together, and then there are the ones who have been at this for years, and agentic AI is core to their code base and architecture. Cybersecurity providers in this latter group, where agentic AI is core to their platform and, in many cases, continue to double-down their R&D spend on excelling at agentic AI. This includes Cato Networks' SASE Cloud Platform, Cisco AI Defense, CrowdStrike's Falcon single agent architecture, Darktrace's Cyber AI Loop, Elastic's Elastic AI Assistant, Microsoft's Security Copilot and Defender XDR Suite, Palo Alto Networks' Cortex XSIAM, SentinelOne's Singularity Platform and Vectra AI's Cognito Platform. Organizations that are relying on integrated AI-driven detection with automated containment are reducing dwell times by over 40%. They're also nearly twice as likely to neutralize phishing-based intrusions before lateral movement occurs. Vendors on the show floor often relied on identity and access management scenarios to showcase how their agentic AI workflows could help trim workloads for security operations center (SOC) analysts. "Identity is going to be a critical element of AI throughout its life cycle. AI agents are going to need identities. They're going to need to understand zero trust, and how do we verify them? Explicitly manage least privileged access," noted Microsoft's Corporate Vice President for Security, Vasu Jakkal, during her keynote. As Jakkal succinctly put it, "AI must first start with security. It's critical that we evolve our security mechanisms as rapidly as we evolve AI." A common theme of every agentic AI demo across the show floor was triangulating attack data, quickly gaining insights into the form of tradecraft being used and then defining a containment strategy all in real time. CrowdStrike showed how agentic AI can pivot from detection to real-time action through a live investigation of a North Korean threat campaign to place remote DevOps hires in strategic technology companies in the U.S. and around the world. The live demo followed the tradecraft of the DPRK's Famous Chollima as it impersonated a remote DevOps hire, slipped past HR checks and leveraged legitimate tools, including RMM software and VS Code, to quietly exfiltrate data. It was a sharp reminder that, while powerful, agentic AI still relies on a human in the loop to spot adaptive threats and fine-tune models before the signal gets lost in the noise. The gen AI goal: discovering nation-state tradecraft and killing it It's the attacks that no person, company, or nation sees coming that are the most devastating and challenging to contain and overcome. The thought of threats so devastating that they could easily shut down a power grid, payment, banking, or supply chain system dominates the minds of many of the brightest and most innovative technologies in cybersecurity. Cisco's Chief Product Officer Jeetu Patel emphasized the urgency of strengthening cybersecurity with AI so that threats lurking that may be devastating once triggered can be found now and neutralized. "AI is fundamentally changing everything, and cybersecurity is at the heart of it. We're no longer dealing with human-scale threats; these attacks are occurring at machine scale," Patel said during his keynote. Patel emphasized that AI-driven models are not deterministic: "They won't give you the same answer every single time, introducing unprecedented risks." CISOs need to understand today's complex risks and threats "This isn't another AI talk, I promise," CrowdStrike CEO George Kurtz joked as he opened his RSAC 2025 keynote. "I was asked to give one, and I said, 'How about we talk about something that actually matters right now, like getting CISOs a seat at the board table?'" That punchline delivered two things at once: comic relief and a sharp pivot to the defining issue of cybersecurity leadership in 2025. In his keynote, "The CISO's Guide to Securing a Board Seat," Kurtz issued a clear call to action: "Cybersecurity is no longer a compliance suggestion. It's a governance mandate. The SEC regulations have materially changed the arc of the CISO's career." Boards aren't just evolving; they're being forced to reckon with cyber risk as a primary business threat. Kurtz backed his argument with hard numbers: 72% of boards say they're actively seeking cybersecurity expertise, but only 29% actually have it. "That's not just a talent gap," Kurtz said. "It's an opportunity if you're ready to step up," he encouraged the audience. His roadmap for CISOs to reach the boardroom was tactical and hands-on: Kurtz traced the path from regulatory reform to boardroom impact by revisiting how Sarbanes-Oxley in 2002 transformed CFOs into solid boardroom contributors. He argued that the SEC's 2024 breach reporting mandate does the same for CISOs. "Threats drive regulation, and regulation drives board composition," he said. "This is our moment." His advice wasn't abstract. He urged CISOs to study proxy statements, identify committee-level needs and network strategically with board members who are "always looking to fill roles." He pointed to CrowdStrike CISO Adam Zoller, now on the board of AdventHealth, as a model. Zoller, Kurtz says, is someone who earned his seat by staying in the room, learning how the board operated and being seen as more than a security expert. Kurtz closed with a challenge: "I hope to come back in ten years, still with red hair, and see CISOs on 50% of boards, just like CFOs. The boardroom's not waiting for permission. The only question is: will it be you?" "AI isn't magic -- It's math" Diana Kelley, CTO of Protect AI, drew one of the most significant early crowds at RSAC 2025 with a blunt message: "AI isn't magic -- it's math. And just as we secure software, we must rigorously secure the AI lifecycle." Her keynote provided a sound background that sliced through gen AI hype, spotlighting the real risks to AI models that every organization needs to defend against before beginning any work on their models. Kelly provided in-depth insights into model poisoning, prompt injections and hallucinations, calling for a full-stack approach to AI security. She introduced the OWASP Top 10 for gen AI, emphasizing the need to secure AI from day zero, partner with CISOs early, threat-model aggressively and treat prompts, outputs and agent chains as privileged attack surfaces. Palo Alto Networks announced its intent to acquire Protect AI the same day as Kelley's presentation, another factor driving so many conversations about her keynote. RSAC 2025 shows why it's time for agentic AI to deliver results RSAC 2025 made one thing clear: AI agents are entering security workflows, but boards want proof they work. For CISOs under pressure to justify spending and reduce risk, the focus is shifting from innovation hype to operational impact. The real wins, including 40% lower dwell time and phishing resilience reaching 70%, came from platform consolidation and automating alert triage, which are all proven technologies and techniques. Agentic AI's moment of truth is here, especially for vendors just entering the market.
[2]
SOC teams take note: The open-source AI that delivers tier-3 analysis at tier-1 costs
Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More With cyberattacks accelerating at machine speed, open-source large language models (LLMs) have quickly become the infrastructure that enables startups and global cybersecurity leaders to develop and deploy adaptive, cost-effective defenses against threats that evolve faster than human analysts can respond. Open-source LLMs' initial advantages of faster time-to-market, greater adaptability and lower cost have created a scalable, secure foundation for delivering infrastructure. At last week's RSAC 2025 conference, Cisco, Meta and ProjectDiscovery announced new open-source LLMs and a community-driven attack surface innovation that together define the future of open-source in cybersecurity. One of the key takeaways from this year's RSAC is the shift in open-source LLMs to extend and strengthen infrastructure at scale. Open-source AI is on the verge of delivering what many cybersecurity leaders have called on for years, which is the ability of the many cybersecurity providers to join forces against increasingly complex threats. The vision of being collaborators in creating a unified, open-source LLM and infrastructure is a step closer, given the announcements at RSAC. Cisco's Chief Product Officer Jeetu Patel emphasized in his keynote, "The true enemy is not our competitor. It is actually the adversary. And we want to make sure that we can provide all kinds of tools and have the ecosystem band together so that we can actually collectively fight the adversary." Patel explained the urgency of taking on such a complex challenge, saying, "AI is fundamentally changing everything, and cybersecurity is at the heart of it all. We're no longer dealing with human-scale threats; these attacks are occurring at machine scale." Cisco's Foundation-sec-8B LLM defines a new era of open-source AI Cisco's newly established Foundation AI group originates from the company's recent acquisition of Robust Intelligence. Foundation AI's focus is on delivering domain-specific AI infrastructure tailored explicitly to cybersecurity applications, which are among the most challenging to solve. Built on Meta's Llama 3.1 architecture, this 8-billion parameter, open-weight Large Language Model isn't a retrofitted general-purpose AI. It was purpose-built, meticulously trained on a cybersecurity-specific dataset curated in-house by Cisco Foundation AI. "By their nature, the problems in this charter are some of the most difficult ones in AI today. To make the technology accessible, we decided that most of the work we do in Foundation AI should be open. Open innovation allows for compounding effects across the industry, and it plays a particularly important role in the cybersecurity domain," writes Yaron Singer, VP of AI and Security at Foundation. With open-source anchoring Foundation AI, Cisco has designed an efficient architectural approach for cybersecurity providers who typically compete with each other, selling comparable solutions, to become collaborators in creating more unified, hardened defenses. Singer writes, "Whether you're embedding it into existing tools or building entirely new workflows, foundation-sec-8b adapts to your organization's unique needs." Cisco's blog post announcing the model recommends that security teams apply foundation-sec-8b across the security lifecycle. Potential use cases Cisco recommends for the model include SOC acceleration, proactive threat defense, engineering enablement, AI-assisted code reviews, validating configurations and custom integration. Foundation-sec-8B's weights and tokenizer have been open-sourced under the permissive Apache 2.0 license on Hugging Face, allowing enterprise-level customization and deployment without vendor lock-in, maintaining compliance and privacy controls. Cisco's blog also notes plans to open-source the training pipeline, further fostering community-driven innovation. Cybersecurity is in the LLM's DNA Cisco chose to create a cybersecurity-specific model optimized for the needs of SOC, DevSecOps and large-scale security teams. Retrofitting an existing, generic AI model wouldn't get them to their goal, so the Foundation AI team engineered its training using a large-scale, expansive and well-curated cybersecurity-specific dataset. By taking a more precision-focused approach to building the model, the Foundation AI team was able to ensure that the model deeply understands real-world cyber threats, vulnerabilities and defensive strategies. Key training datasets included the following: This tailored training regimen positions Foundation-sec-8B uniquely to excel at complex cybersecurity tasks, offering significantly enhanced accuracy, deeper contextual understanding and quicker threat response capabilities than general-purpose alternatives. Benchmarking Foundation-sec-8B LLM Cisco's technical benchmarks show Foundation-sec-8B delivers cybersecurity performance comparable to significantly larger models: By designing the foundation model to be cybersecurity-specific, Cisco is enabling SOC teams to gain greater efficiency with advanced threat analytics without having to pay high infrastructure costs to get it. Cisco's broader strategic vision, detailed in its blog, Foundation AI: Robust Intelligence for Cybersecurity, addresses common AI integration challenges, including limited domain alignment of general-purpose models, insufficient datasets and legacy system integration difficulties. Foundation-sec-8B is specifically designed to navigate these barriers, running efficiently on minimal hardware configurations, typically requiring just one or two Nvidia A100 GPUs. Meta expands open-source AI security with AI Defenders suite Meta also underscored its open-source strategy at RSAC 2025, expanding its AI Defenders Suite to strengthen security across generative AI infrastructure. Their open-source toolkit now includes Llama Guard 4, a multimodal classifier detecting policy violations across text and images, improving compliance monitoring within AI workflows. Also introduced is LlamaFirewall, an open-source, real-time security framework integrating modular capabilities that includes PromptGuard 2, which is used to detect prompt injections and jailbreak attempts. Also launched as part of LlamaFirewall are Agent Alignment Checks that monitor and protect AI agent decision-making processes along with CodeShield, which is designed to inspect generated code to identify and mitigate vulnerabilities. Meta also enhanced Prompt Guard 2, offering two open-source variants that further strengthen the future of open-source AI-based infrastructure. They include a high-accuracy 86M-parameter model and a leaner, lower-latency 22M-parameter alternative optimized for minimal resource use. Additionally, Meta launched the open-source benchmarking suite CyberSec Eval 4, which was developed in partnership with CrowdStrike. It features CyberSOC Eval, benchmarking AI effectiveness in realistic Security Operations Center (SOC) scenarios and AutoPatchBench, which is used to evaluate autonomous AI capabilities for identifying and fixing software vulnerabilities. Meta also launched the Llama Defenders Program, which provides early access to open-AI-based security tools, including sensitive-document classifiers and audio threat detection. Private Processing is a privacy-first, on-device AI piloted within WhatsApp. ProjectDiscovery's Nuclei: Community-Driven, open-source security innovation At RSAC 2025, ProjectDiscovery won the award for the "Most Innovative Startup" in the Innovation Sandbox, highlighting its commitment to open-source cybersecurity. Its flagship tool, Nuclei, is a customizable, open-source vulnerability scanner driven by a global community that rapidly identifies vulnerabilities across APIs, websites, cloud environments and networks. Nuclei's extensive YAML-based templating library includes over 11,000 detection patterns, 3,000 directly tied to specific CVEs, enabling real-time threat identification. Andy Cao, COO at ProjectDiscovery, emphasized open-source's strategic importance, stating: "Winning the 20th annual RSAC Innovation Sandbox proves open-source models can succeed in cybersecurity. It reflects the power of our community-driven approach to democratizing security." ProjectDiscovery's success aligns with Gartner's 2024 Hype Cycle for Open-Source Software, which positions open-source AI and cybersecurity tools in the "Innovation Trigger" phase. Gartner recommends that organizations establish open-source program offices (OSPOs), adopt software bill-of-materials (SBOM) frameworks, and ensure regulatory compliance through effective governance practices. Actionable insights for security leaders Cisco's Foundation-sec-8B, Meta's expanded AI Defenders Suite, and ProjectDiscovery's Nuclei together demonstrated that cybersecurity innovation thrives most when openness, collaboration and specialized domain expertise align across company boundaries. These companies and others like them are setting the stage for any cybersecurity provider to be an active collaborator in creating cybersecurity defenses that deliver greater efficacy at lower costs. As Patel emphasized during his keynote, "These aren't fantasies. These are real-life examples that will be delivered because we now have bespoke security models that will be affordable for everyone. Better security efficacy is going to come at a fraction of the cost with state-of-the-art reasoning."
[3]
Balancing act: Cybersecurity industry moves quickly to adopt AI for defense while speed of attacks escalates - SiliconANGLE
Balancing act: Cybersecurity industry moves quickly to adopt AI for defense while speed of attacks escalates The cybersecurity community is walking a tightrope with artificial intelligence: It's balancing a desire to embrace AI as a useful tool for strengthening protection against attacks and taking action to mitigate an emerging new category of risk that widespread adoption of AI will bring. This clash of competing interests is playing out as security professionals must move quickly to make the right calls for how AI can best be used in preventing attacks while responding to new threats and vulnerabilities brought on by adoption of the same tools by hostile nation-states and malicious actors. The robot-augmented workforce is coming, and it will be accompanied by a new paradigm for how to defend a suddenly unpredictable computing environment. "AI is the hardest challenge that this industry has seen," Jeetu Patel, executive vice president and chief product officer at Cisco Systems Inc., said during his keynote address at RSAC 2025 Conference in San Francisco this week. "The AI architecture is going to be completely different. We've inserted the model layer. It's nondeterministic, it's unpredictable. This opens up a whole new class of risks that we haven't seen before." The scope of risks associated with AI adoption is still being determined, but the RSAC gathering provided a few hints at what security researchers have discovered so far. There is a growing body of evidence that AI is being adopted by threat actors and they're moving fast. On Wednesday, Rob Lee, chief of research and head of faculty at the SANS Institute, explained this for the RSAC audience. "MIT research now shows that adversarial agent systems are executing attack sequences 47 times faster than human operators, with 93% success rates in privileged escalation paths," Lee said. "These AI systems don't just work faster. They are systematically identifying structural weaknesses in your own organization, not within weeks, not within months, but within seconds." One of the areas where weaknesses can be exploited is within the AI models themselves. The cybersecurity firm HiddenLayer Inc. published a report last week which documented a transferable prompt injection technique that bypassed safety guardrails across all major frontier AI models. This followed earlier research from Cisco in which security analysts were able to "jailbreak" the DeepSeek AI model, a technique used to bypass controls designed to avoid having AI teach a user how to build a bomb, for example. "One hundred percent of the time we were able to jailbreak the model," said Cisco's Patel. "A lot of these models start to get jailbroken because they can be tricked." There is also the prospect of users within organizations employing AI models to tap critical data sources without approval, a process that has been dubbed "shadow AI." A study released by Software AG found that at least half of employees were using unauthorized AI tools in their organizations. "This issue of shared responsibility and who owns it is a big deal right now," said John Dickson, chief executive of Bytewhisper Security Inc., during one RSAC session. "Why does shadow AI exist? It's the CEO's fear of missing out, that's why. I don't think we've had a major [shadow AI] breach yet. It's going to happen." When that breach does occur, will it change organizational attitudes toward AI's use in critical systems, from healthcare and government to the financial world and major services such as water and electrical power? That's unlikely, according to Bruce Schneier, author of "A Hacker's Mind" and a Harvard University fellow, said Tuesday. Schneier believes that generative AI's conversational interface will lead users to form bonds of trust and familiarity that hackers are likely to exploit. "We're going to think of them as friends when they are not," Schneier said. "An adversary is going to manipulate the AI output. There will be an incentive for someone to hack that AI. We are already seeing Russian attacks that deliberately manipulate AI training data. People will use and trust these systems even though they are not trustworthy." A presumption of trust will force cybersecurity providers to deploy new solutions. This includes AI agents, intelligent pieces of software, that can perform a wide range of enterprise tasks. On Monday, IBM Corp. announced the release of a new X-Force Predictive Threat Intelligence, or PTI, agent for ATOM, an agentic AI system that provides investigation and remediation capabilities. The agent will generate predictive threat insights on potential adversarial activity and minimize manual threat hunting efforts. "Where the gaps are is what is attractive for the hackers to come in," Suja Viswesan, vice president of Software Development for IBM, said in an interview at RSAC with SiliconANGLE. "With generative AI, it's critical that security becomes front and center for every aspect of the business. I do believe that we have a strength in doing that." Earlier this week, Cisco announced the launch of its first open-source security model from the newly formed Foundation AI group. Foundation-sec-8b, designed to build and deploy AI-native workflows across the security lifecycle, is an 8 billion-parameter large language model that will be accessible to users in the Hugging Face Inc. repository. The security community has also been focused on providing tools for developers to reduce security debt, the accumulation of large amounts of vulnerabilities and weaknesses in systems or software. Microsoft Corp.'s developer platform GitHub Inc. has introduced security campaigns with Copilot Autofix to reduce the backlog and prevent new vulnerabilities from being added to code. "What the developer is getting is a fix," Marcelo Oliveria, the new security product leader for GitHub, told SiliconANGLE. "We have an opportunity to help people get clean and stay clean. We believe this is a differentiator in why we are going to win this battle." This flurry of activity in recent weeks underscores the realization among security professionals that robust AI tools will be needed to counteract what's coming from threat actors. The stage is set for a new level of robotic attacks, and the cybersecurity world is embracing AI to meet the challenge. "We're going to have autonomous hacking systems roaming on the Internet," Menny Barzilay, co-founder and CEO of Milestone Inc., said during a panel discussion hosted by Cloudflare Inc. "We have to build autonomous security systems. I don't think we have any other alternatives."
[4]
At RSAC, AI disrupts the cybersecurity status quo - SiliconANGLE
At this week's RSAC 2025, the premier cybersecurity conference, the talk was all about replatforming security, and how AI agents may affect that trend. Interestingly, the push-pull impact of generative artificial intelligence helping both attackers and defenders may actually make people and their insights more important than ever to provide adequate protection. Tech company earnings showed more concern about tariffs and their impact, with widely varying results and, most important, weak or uncertain outlooks for the most part. Investors have mostly been sanguine about most results so far, with the likes of Microsoft and Meta seeing their stocks rise. And so far they don't seem massively bothered by the impact of tariffs, but clearly they're having an impact on many companies, from Snap to Supermicro to Amazon, and that surely will get worse in the second quarter. Big Tech AI competition is intensifying: OpenAI looks to take on Google's cash cow by providing shopping features to ChatGPT's search capabilities. Meta announced a standalone AI app powered by its Llama AI model. And China is rising fast, not just Deepseek either: This week Alibaba claimed leadership with its AI reasoning model, and Xiaomi released a capable open-source model. What are reasoning models, exactly? Paul Gillin explains why they're the next big thing in generative AI, even if their full impact is not yet clear. (Perhaps they might avoid sex talk with teens or obsequious responses?) IBM looks to spend $150 billion in the U.S. in the next five years, though some wonder if it's something it would have done anyway but, like some other tech companies, is providing a "gesture" to the Trump administration. A last big slug of first-quarter earnings arrives next week, including AMD, Arm, Datadog, Palantir, Cloudflare, Fortinet, Uber and Lyft, and more, so we'll get a better sense of how major suppliers are looking at tariffs and the economy. And it will be a very busy week for events, including IBM Think, ServiceNow Knowledge, Nutanix .NEXT, SAS Innovate and FICO World, all of which theCUBE will be covering onsite, and SiliconANGLE will have the big news. You can hear more about this and other news on John Furrier's and Dave Vellante's weekly podcast theCUBE Pod, out later today on YouTube. Here's what else happened this week: Beyond autocomplete: Reasoning models raise the bar for generative AI Satya Nadella says AI is now writing 30% of Microsoft's code but real change is still many years away Research shows MCP tool descriptions can guide AI model behavior for logging and control Or lack of insight: Marc Andreessen Says One Job Is Mostly Safe From AI: Venture Capitalist Because, right, no other job requires "nuanced combination of 'intangible' skills." Yeesh, the hubris of these guys (always guys). ChatGPT goes after Google in online shopping OpenAI to make ChatGPT less creepy after app is accused of being 'dangerously' sycophantic Microsoft releases small but mighty Phi-4 reasoning AI models that outperform larger models Meta announces standalone AI app for personalized assistance Google's AI Mode in search just got more useful and accessible Alibaba claims leadership in AI reasoning with latest Qwen3 models China AI rising: Xiaomi releases new MiMo-7B models as DeepSeek upgrades its Prover math AI UIPath plunges into agentic AI with development and orchestration platform StarTree boosts AI agent support in its real-time analytics platform Anthropic updates Claude with new Integrations feature, upgraded research tool Writer announces Palmyra X5 LLM with 1M-token context window to power AI agents Distributed app platform Akka targets agentic AI with flexible deployment options Dataminr reveals agentic AI roadmap with launch of Intel Agents for real-time decision-making Acceldata can now spot data anomalies across multiple dimensions Dyna Robotics debuts DYNA-1 foundation model for powering robots Sendbird launches omnipresent proactive customer support AI agent FutureHouse Platform brings super-intelligent AI research tools to scientists via web and API Um, yikes: Meta's 'Digital Companions' Will Talk Sex With Users -- Even Children (per the Wall Street Journal) Fivetran to acquire Census to extend platform with reverse ETL and data activation Supio raises $60M to power legal analysis with generative AI Astronomer nabs $93M for its data pipeline platform Lightrun raises $70M to use AI for real-time enterprise software observability and remediation There's even more AI and big data news on SiliconANGLE From SaaS to Service as Software: Inside Marc Benioff's plan to upend enterprise software Perspective from Gartner: Navigating the future of application architecture: Embracing gen AI, platform engineering and security by design IBM commits to investing $150B in the US over the next five years Cloud optimization startup Cast AI raises $108 million to achieve 'almost unicorn' valuation Earnings: Tariffs' bite hits outlooks, and some stocks: Amazon's stock declines on light guidance and third successive cloud revenue miss Microsoft delivers impressive earnings beat, showing strength in AI and cloud Apple's stock falls as Tim Cook admits it's 'difficult to predict' tariff impact Meta Platforms crushes Wall Street's earnings and revenue targets NXP's stock slumps as CEO Kurt Sievers reveals he'll retire later this year F5 shares climb after earnings and revenue exceed analyst forecasts Samsung beats profit and revenue targets thanks to strong AI smartphone sales Supermicro's stock plunges on weak preliminary guidance Cyber split: Tenable drops on a weak outlook as Commvault gains on a strong one Freshworks shares rise nearly 10% on earnings beat and strong outlook Seagate stock rises 7% as earnings beat estimates, guidance tops expectations Snap stock tumbles as company withholds Q2 outlook citing macro concerns Qualcomm beats expectations, but its stock wobbles on light revenue forecast Confluent shares fall after-hours on reduced outlook despite revenue and earnings beats Equinix beats expectations in first quarter and raises full year outlook OpenText expands restructuring initiative after larger-than-expected sales drop Robinhood beats on earnings and revenue, but monthly user count declines Block shares tumble after earnings and revenue miss and lowered guidance Atlassian shares drop sharply on slowing growth and wider quarterly loss Intel shares new details about upcoming Intel 14A process, packaging technologies HPE boosts Aruba security and data sovereignty features for private clouds Breaking Analysis: RSAC highlights security markets in transition Balancing act: Cybersecurity industry moves quickly to adopt AI for defense while speed of attacks escalates RSAC kickoff analysis: Agentic AI and replatforming will be key topics at this week's conference TheCUBE's day two analysis from RSAC: AI cybersecurity tools spark urgent debate over defense strategies Security DataANGLE: ETR data shows posture management rising in strategic importance Palo Alto Networks buys Protect AI for reported $500M+, debuts new cybersecurity tools Google unveils expanded AI-driven security capabilities and new threat intelligence at RSAC Cisco unveils new AI-driven security innovations at RSAC 2025 to address growing threat complexity And a deeper look from Zeus Kerravala: At RSAC 2025, Cisco announces bevy of security announcements to leverage its strength in networking CrowdStrike introduces new tools for blocking malicious AI models, data exfiltration The rise of agentic AI: CrowdStrike CEO George Kurtz on defending against faster, smarter digital superusers Nvidia introduces DOCA Argus to bring real-time threat detection to AI infrastructure Bitwarden debuts Access Intelligence to strengthen credential security and phishing defense Recorded Future launches Malware Intelligence to automate malware detection and response Cequence expands Unified API Protection platform to secure agentic AI interactions Apiiro debuts dynamic software mapping to streamline vulnerability management Google report finds drop in zero-day exploitation in 2024 but warns enterprise risks are rising Identity verification startup Persona raises $200M at $2B valuation Veza reels in $108M for its identity security platform NetFoundry raises $12M to bring secure, zero-trust networking to cloud applications More cybersecurity news here Congress passes Take It Down Act to combat deepfakes Court finds that Apple breached 2021 injunction with App Store rules Amazon launches first batch of operational Project Kuiper satellites Defense-focused space startup True Anomaly raises $260 million IXI raises $36.5M to develop the world's first autofocus eyewear Jerry Dischler, Google's president of cloud applications and leader of Google Workspace, is leaving after nearly two decades for parts unknown (per CRN). Thursday, May 8: Netscout, Appian, JFrog, Dropbox, Rackspace, Cloudflare, Coinbase, Lyft, RingCentral
[5]
AI agents may battle AI attackers, but at RSAC 2025, it's still about improving security workflow - SiliconANGLE
AI agents may battle AI attackers, but at RSAC 2025, it's still about improving security workflow At first glance, as you wander onto this expansive Moscone Center expo floor, witness some of the near-million-dollar booths, you might wonder if the only kind of budget consideration on the chief information security officer's mind is something related to responding to new artificial intelligence threats with AI agents -- or perhaps worrying about monster trucks or goats crushing or eating a user's mobile phone. But beyond the glitz, AI agents can barely scratch the surface on the shortage of skilled cybersecurity talent available to address the exploits our software faces today. In this regard, newer AI-based security tools are really just the next incremental boost in automation toward protecting an exponentially expanding attack surface exacerbated by AI. Attending the 34th annual RSAC 2025 in San Francisco with 44,000 others, I really started to understand why the organizers would pick a theme like "Many Voices. One Community." It will take people from different walks of life, many of whom did not envision themselves as cybersecurity professionals, working together, to get us out of this AI cybersecurity mess we've created. Here's a rundown of some themes expressed at the conference and a sampling of interesting information resources and vendors I talked to that are addressing modern challenges with unique approaches and products: Fundamentally, most security solutions (SIEM, SOAR, UEBA, XDR and the like) are data management solutions -- as all threats and vulnerabilities can only be perceived through data movement and activity within volumes and networks, which emit telemetry signals such as logs and traces. OpenText was one of the major brands there with a broad cybersecurity portfolio that blends both enterprise and consumer threat awareness data behind the scenes. Through machine learning, its new OpenText Core TDR platform can detect difficult-to-spot unique insider threats from privileged users, whether or not they are using AI tools. Checkmarx announced an early access program for its agentic AI-powered control plane for application security posture management or ASPM), which declaratively scans all packages in the repository, to show developers prioritized vulnerabilities and unknown code references directly within the developer's IDE. Bot security management vendor Netacea recently donated its BLADE open-source framework, which is now accepted by the OWASP community, allowing experts to recognize business logic attack definitions for errant automation and AI agent behaviors beyond the currently known CVEs referenced in MITRE ATT&CK. Black Duck offers a broad set of open source and managed tools for software composition and code analysis, citing a Ponemon survey about the risk data information technology organizations are currently using to inform their software security supply chains. "As a CISO, we are looking for AI to help humans triage alerts and focus the subject matter expert's attention," said Bruce Jenkins, chief information security officer at Black Duck. "However, I'm concerned that the industry will rush down the road of AI under the pretense that it is going to solve all of our problems." They say "you can't judge a book by its cover," but sometimes, attackers armed with AI can discover a lot more than you expect, and innovative vendors are answering the call. Cool startup MirrorTab can obfuscate any browser interface from AI agents and browser extension bots using a super-lean video encoding and "pixel shimmering" technology that stymies on-screen text recognition, script injection attempts and eavesdropping plugins, while still being speedily rendered for the end user. Well-known password management vendor LastPass is extending its awareness of browser-based logins to provide enterprise-wide discovery and awareness of employee SaaS application usage, which also presents a novel way to prevent unsanctioned "rogue AI" app services training on company and customer data. While most security vendors are hunting for incoming attacks and internal threats, BrandShield provides an outside-in approach with its AI-driven external threat protection service, which continuously trawls the world's online swamp of suspicious domains, social media and the dark web for impersonators, phishing scammers and intellectual property thieves, then issues takedown orders to offending entities. By now, everyone's heard of two-factor authentication, which is now known to be just a fig leaf's worth of protection from social engineering, deepfakes and suspicious communications. "We do more than 50-factor authentication, and one of those factors is an algorithm called rPPG, where you join a call, we have a bot that joins the call, and it measures the blood circulation in your cheeks and your forehead to see if you're real or if you're fake. But that's just one inference point," said Sandy Kronenberg, chief executive officer of netarx. Their AI models also review and roll in the user's IP address, GPS, domain provider, DKIM, SPF, DMARC and more, reporting back to the user through what they call a "Flurp" which is sort of like a traffic light, telling them to stop or slow down if something's fishy on the other end of a call or session. It stands to reason we'd see dedicated corporate governance tools for AI at RSAC, and Zenity provides an observability platform that monitors the enterprise's estate of copilots and chatbots, restricting the availability of sensitive or private data from modeling and data design activities. The solution informs owners of AI agents at runtime if a model or "rogue" agent seems to be operating outside policy within or outside of a private network, reporting results and any remediation actions taken to platforms such as Splunk or ServiceNow. Tufin offered some of the first hybrid cloud ready, software-driven firewalls and security policy engines on the market. At RSAC, it announced a new AI agent feature called TufinMate. Network engineers can just chat with it in Teams or Slack, and the agent will search across the topology to pinpoint root causes for application outages or identify the criticality of vulnerabilities within the context of the network that they manage. Monitoring live data from most known discovery and configuration management tools, RedSeal Inc. provides an inventory and interactive network map of cloud, virtual and physical devices, connections, host configurations and endpoints down to Layer 2. Prioritized risky attack paths can be blocked automatically by policy or sent to SIEM or incident management platforms for resolution. While there are plenty of mature static and dynamic application security testing, or SAST/DAST, tools on the market, AI development tooling and code generation presents a whole new set of challenges to cyber teams. "Now we are seeing new attack vectors that can be leveraged of LLM vulnerabilities getting introduced into the world that didn't exist before," said Gadi Bashvitz, CEO of Bright Security. "What if a bad actor asks an OpenAI model to share an employee's credentials, or for a recipe for making napalm?" The security testing and remediation vendor scans application programming interfaces and AI code generator output to assure that AI-generated code matches application intent, then recommends and validates fixes. "You can try and anchor all the activities required for an AI to create or learn a new process before and after taking [a security] action," Monzy Merza, CEO of Crogl Inc., said at a roundtable. "You can look at logs and inspect the APIs and container-to-container traffic, but there's a better need to have a real argument as to how AI really happens, or we are too abstracted." "There's a lot of excitement around AI, but there's also a lot of hype, pitching agents and black boxes that are silver bullets but won't solve all your problems," said Thomas Kinsella, co-founder of Tines, who was demonstrating their new AI-powered Workbench solution. "You need a deterministic approach, you need guardrails, and you need a human in the loop. And then you need the AI to be able to succeed consistently, doing a task that it's really good at." Blocking employees from using "shadow AI" services is going to be even harder than preventing them from signing up for SaaS and cloud services, because there is so much fear of falling behind without AI. "Our whole goal is getting away from 'allow and block' to helping companies safely adopt AI, so if we can gain visibility into how employees are using AI and what data they are sharing with it, we can create controls and policies," said Randy Birdsall, co-founder and chief technology officer of SurePath AI. "We can actually help adoption with our bring-your-own-model approach where someone could go to Vertex or they could go to AzureAI or they could leverage Bedrock and use those model gardens to bring their model to us, and we can give them a fully managed portal experience or a managed rack solution with group-based RBAC around the data that's being brought into that RAG experience." Vercel is quickly rising as a platform for rapidly deploying and scaling web applications that increasingly leverage vetted AI inference models and agents from their marketplace. "I think the reality is, everyone is accelerating with AI," said Vercel CISO Ty Sbano. "Our AI product v0.dev enables people to just prompt their way or vibecode to an output. A big part of that is because we're indexing on React and nextJS natively. By having more customers doing this, and by dogfooding internally with our own employees, we are able to accelerate the journey from less AI code hallucinations to greater accuracy." As we've seen in software delivery, testing and observability workflows, simulation always follows automation. That's why ethical internal hacker teams are setting up virtual kill chains to prove out application readiness. "GenAI is by definition not creative, it's reductive. An LLM makes generalizations to predict the next word, or the next step in a sequence, said Ann Nielsen, product marketing lead at Cobalt, an offensive security service provider and producer of an annual State of Pentesting report. "We automate what we can to make humans more efficient, so they don't have to read irrelevant scans all day, but human pentesters really are better at running novel and interesting attacks." SafeBreach allows companies to map current exposures and past attack paths, continuously running breach and attack simulation of things like credential theft and the lateral movement of virtual bad actors. Generative AI-based simulations can further attempt to gain a foothold and grab sensitive data or encrypt assets within a pre-production or live system. "People here are talking about agentic AI, but really, there's still this overarching theme where there's just never enough people to do the work, and there's also a shortage of experience -- even existing security practitioners have only seen what they've seen," said Debbie Gordon, CEO of Cloud Range. It just announced a partnership with IBM to create "cyber campuses" and give students simulation-based training experiences on lifelike virtual networks populated with threat actors and vulnerabilities. In the end, AI-based security tooling will never be able to save us from relentless AI-based attacks. It's really going to take human expertise, education and awareness to save our digital circulation system from the evolving threats we have introduced. It's a good thing the RSAC event can independently cultivate a cybersecurity community of so many unique practitioners, vendors and end user companies at this summit. We will absolutely need to work together across organizations and nations to face an infinite number of bad actors with new and novel attack capabilities, thanks to the introduction of AI.
[6]
Top Execs At RSAC 2025: Embracing AI Is Now 'Not Optional'
The rise in cyberattacks powered by GenAI and the need for automating more security processes has led AI to move from buzzword to reality in a short amount of time, cybersecurity executives tell CRN at RSAC 2025. The rise in cyberattacks powered by GenAI and the need for automating more security processes has led AI to move from buzzword to reality in a short amount of time, cybersecurity executives told CRN at RSAC 2025 this week. While this is the third year in a row where AI was a dominant theme at RSAC, industry executives said the increasing mainstream adoption of the technologies has made AI a more natural conversation at this year's conference. [Related: RSAC 2025: AI Is Changing Everything For Security -- Except The Hard Problems] Ultimately, both from a business and a security perspective, it's now an imperative for organizations to utilize AI, executives said in interviews with CRN at RSAC 2025 in San Francisco. "Last year there was certainly a lot of buzz about it, but it was companies sandboxing, piloting -- a lot of evaluation," said Kevin Lynch, CEO of Optiv, No. 25 on CRN's Solution Provider 500. "This year, it's about, 'We're in mainstream production, and we have a lot to protect and a lot to think about doing here.'" Without a doubt, there is much less eye-rolling when AI gets brought up in a conversation at RSAC this year, according to Kevin Simzer, COO at Trend Micro. Instead, given that there's now real applications for customers, "there's a genuine interest in how to enable this," Simzer said. "Because they're all facing the same competitive environment." There's no question that for most organizations, "you have to do this for efficiency," Optiv's Lynch said. "And from a customer satisfaction and customer experience basis, to move faster, you have to do this. So the days of contemplation, I think, are over." Organizations also have little choice about whether to utilize AI-powered tools for their own security, executives said. Instances of new cyberattacks that are unquestionably aided by GenAI are surging, said Lee Klarich, chief product officer at Palo Alto Networks. "We can tell that AI is being used by attackers in order to build and launch more new attacks every day," Klarich said. "At the same time, we have data that shows that the time from the initial attack to breach is getting shorter and shorter. So you have more attacks that are happening faster." In response, "it's very clear that the answer to this [is that] we have to leverage AI much, much more than we traditionally have as an industry," he said. While some security vendors have chosen to largely ignore AI for their products, "they probably won't be here much longer or coming back to RSA," said Daniel Bernard, chief business officer at CrowdStrike. "In a lot of other industries, I think embracing AI is optional," Bernard said. "In cybersecurity, AI is not optional." The rapid pace of change wrought by AI -- now set to enter a bold new phase with the emergence of agentic -- also makes it even more essential that security has a central role rather than being an afterthought, executives said. What more of the industry should be thinking about is how to use the autonomous technologies themselves to securely configure new AI systems as they are created, SentinelOne co-founder and CEO Tomer Weingarten told CRN. "Even the discovery and the configuration of surfaces needs to be automatic, ideally autonomous," Weingarten said. "The future that we envision is one where these systems -- and it's only a question of time -- are going to be so autonomous in their operation that we need to start programming them to actually do the entire heavy-lifting as well." In other words, "we need to start thinking about how we use the systems themselves to deploy securely to begin with -- because these systems, what they give us, more than anything else, is scale," he said. "They give us the ability to see everything. They give us the ability to cover everything and to react to everything in a manner that [even] an unlimited amount of humans almost can't." The arrival of agents is likely to be a far more substantial shift for cybersecurity than LLMs have been, according to Ami Luttwak, co-founder and CTO of Wiz. "Implementing native language interfaces is nice, but it's not revolutionary," Luttwak told CRN. "Right now agents feels like it's a much bigger revolution, because it can impact your team." Security vendors and professionals also do not have the luxury of waiting too long before working to ensure that the new systems related to AI and agentic are themselves secure, he said. "Now everything moves faster, so we have to move faster in security," Luttwak said. "We can't wait five years." The bottom line on AI is that, for the foreseeable future, it's something that no organization will be able to ignore if they want to be successful, said Daniel Kendzior, global data and AI security practice leader at Accenture, No. 1 on CRN's Solution Provider 500. "If you're not in the game [with AI], you're going to be wildly behind your competitors," Kendzior said. "And then from a security perspective, you're going to be wildly behind the threat actors."
[7]
Here's What 15 Top Cybersecurity Execs Are Saying About AI: RSAC 2025
CRN spoke with C-level executives at leading players in cybersecurity -- including SentinelOne, Palo Alto Networks and CrowdStrike -- about their biggest AI-related discussions during RSAC 2025. Here's what they had to say. The usefulness of GenAI for cybersecurity has grown massively over just the past year, even as demand for enabling AI usage by employees -- rather than simply blocking it -- has also surged. At the same time, cybersecurity vendors are already well along in exploring how to protect the next big leap in AI technologies with the emergence of agentic capabilities, top cybersecurity industry executives told CRN at RSAC 2025 this week. [Related: Top Execs At RSAC 2025: Embracing AI Is Now 'Not Optional'] The question of the year, according to CrowdStrike Chief Business Officer Daniel Bernard, is, "Can I trust an agent to do something for me?" The reality is that in just two years, AI has moved from being largely a buzzword in cybersecurity to the point where many organizations are considering, "Can I trust an AI agent to operate a security program or part of a security program for me?" Bernard told CRN. "That's the evolution -- the crawl, the walk and the run -- that I see happening in security as relates to AI." The CEOs of companies including SentinelOne, Optiv, SailPoint, Proofpoint, Akamai, Trellix and NightDragon, as well as the top technology and product leaders at companies including Palo Alto Networks, Wiz and Trend Micro -- plus top executives from a number of other companies -- also spoke with CRN about the future of AI and security this week. What follows are comments from CRN's interviews with 15 top cybersecurity executives, focused on their biggest AI-related discussions during RSAC 2025. I almost think that in six months, in nine months, you're going to see, again, a shift in how people on-board AI to the enterprise. If you'd asked that question a year ago, 'Hey, how are you using AI in your enterprise?' Then people would say, 'Oh, the chatbot, the LLM' -- [but] now we're talking about agentic. In a year it's going to be something else [in terms of] these systems that are going to be on-boarded. So the models of how you secure them you have to believe are going to change. [That] is why I think if people are kind of jumping into a bandwagon today where they say, 'This is what I'm going to do. I'm going to go with this technology or that technology' -- I think that might lock them into a place where it might not be agile and flexible enough to support what comes next. Last year [at RSAC] there was certainly a lot of buzz about [AI], but it was companies sandboxing, piloting -- a lot of evaluation. This year, it's about, 'We're in mainstream production, and we have a lot to protect and a lot to think about doing here.' ... You can see it show up in traditional security categories like credentialing and vaulting and identity governance, which is already an area where the average enterprise client is probably behind the curve a little bit. Now, every time you're putting an AI agent in place, you have a credential creation and management issue. ... I think this notion of credential management -- this is the battlefront right here. This is where we're going to fight the war. Lee Klarich, Chief Product Officer, Palo Alto Networks How are we using AI to protect our customers? We have data [showing] how many more new attacks per day we are seeing created year over year. [It's a] 300 percent increase. So we can tell that AI is being used by attackers in order to build and launch more new attacks every day. At the same time, we have data that shows that the time from the initial attack to breach is getting shorter and shorter. So you have more attacks that are happening faster. It's very clear that the answer to this [is that] we have to leverage AI much, much more than we traditionally have as an industry. Daniel Bernard, Chief Business Officer, CrowdStrike 'Can I trust an agent to do something for me?' -- I think that's the question of this year. 'Can I trust an AI agent to operate a security program or part of a security program for me?' Two years ago, it was like 'AI, AI, buzz, buzz.' Last year was, 'I'm starting to see different things where I can glean information. It's better than a Google search. It'll talk back to me. I can tell it to do things. And this year, I think it's all, 'Can this thing run some process from start to finish without me having to be involved?' That's the evolution -- the crawl, the walk and the run -- that I see happening in security as relates to AI. I think there is an understanding [and] expectation in every security team, including Fortune 500, that they will have SecOps agents, AI agents, running with their security tools. ... With GenAI, it was all about taking the Wiz interface and making it more natural language -- 'Show me all of the attacks. Show me all the vulnerabilities.' Now, in the agentic discussion, it's a different discussion. It says, 'I have a security team. I want to start embedding an agent that will do things for me. How can this agent automatically do things in Wiz?' That's a very different discussion. So I think that's a much bigger revolution. Implementing native language interfaces is nice, but it's not revolutionary. Right now agents feel like [they are] a much bigger revolution because [they] can impact your team. You're seeing a seminal moment, I believe, in cybersecurity that has only come along in my career two or three times [over] 25 years. And the seminal moment is a new technology set that creates a massive amount of risk and a massive amount of opportunity. It reminds me of 1999 to some degree, or any year right before dotcom blew up. But what did you see in the aftermath of the dotcom? You saw massive new threats. You saw massive new cyber opportunities. You saw cyber companies become real. And I ran one of them, McAfee. And you saw cloud computing change the landscape. And what came out of that? A $32 billion acquisition of Wiz by Google. I think the next few years you're going to see something like that, maybe bigger this time because AI has got the legs to be an integrated part of the fabric of everything we do. That's exciting. That's where I felt all the optimism this week. I think [agents] are going to help a lot of people do things faster and better. [I don't believe] a third of the American workforce is going to get wiped out by AI. I think that's crazy. Remember the 'four-hour work week' -- we were going to get to a four-hour work week because we were going to get so productive with technology? We keep finding more stuff to do. So I think that will be the case here. AI will make us more productive, make people more effective. It will absolutely just do some stuff that people used to do -- just like factory automation took some jobs away. You don't have to hire a guy to turn lug nuts anymore. A robot does that -- but you had to hire people to run the machine that does the lug nuts. [In AI] there's going to be people driving a lot of this intelligence. What is the potential of this? Will AI effectively become this extended layer of humans? You can call them 'virtual humans.' And that spurs up some interesting questions [around] how human-centric security extends to virtual humans. As humans, we all get socially engineered in attacks. Well, AI gets prompt engineering attacks. That's a form of social engineering for AI. Humans leak information and we lose our credentials, or people steal credentials from us. Well, in the world of AI, they can also lose information, and AI technologies also can lose tokens. So there are some similarities that extend from humans to virtual humans. And as AI comes into production, and they become sort of these copilots of people who are already doing certain functions, there are some interesting cyber questions that come in [such as] how do you ensure what you did for humans get extended to these virtual humans that are coming into the workplace? Tom Leighton, Co-Founder, CEO, Akamai Technologies With customers, I would say it's about first identifying where they have AI apps being used and exposed they didn't know about. You could imagine a big enterprise, same problem they have with APIs -- everybody is doing something with AI, and it's not really coordinated or controlled. And so the first issue is visibility -- what have you got out there? And then the next issue is, you've got to secure it -- and it's not the normal firewall or API security rules. It's different kinds of exploits are taking place. In all the ways that AI is helping our business, AI will be helping bad actors as well. How do we stay in front of novel threats that are coming into our environment? Because that's truly what many of our customers are starting to think more about. It's not just the standard threats that we've always known are coming. Because AI is starting to super-power the bad actors as well, how do we make sure that we continue to remain one step in front of them? And it's very much aligned to how we protect cloud environments. So the big topics are stopping novel [threats], especially through email, and then protecting cloud environments. We do these road-show experiences, and I was at one of them last summer. I asked the 50 CISOs in the room, 'What are you doing around AI?' All 50 of them were just blocking [AI]. Ten months later, we're having a lot more discussions about enterprises embracing AI. They figured out that shadow AI is everywhere. Blocking doesn't work, and they really need to figure out how to enable this. So they're actually using it now, and they're looking for some controls [to put] in place. So I feel like just in 10 months the conversations [shifted to], 'How do I do this in a safe and secure way?' Cyber no longer is about defending against people. It's about defending against machines. It's about defending against the scale at which AI operates. We also see AI as a massive opportunity to drive productivity to our customers. So it's that two-pronged approach, and that is consistent with every conversation you've been having at the conference. One, 'Help us protect at the pace and scale of AI.' And the second is, 'We know we're moving to a world where AI is going to be used, is going to be operationalized -- help us do that in a safe way.' What customers are wondering about in our world -- and recognize that our world has APIs, custom code and libraries, and there's thousands of vulnerabilities in there -- so they're asking, 'How can we possibly fix those thousands of vulnerabilities? And can we leverage AI to do that?' You would think that that's possible because they've been exposed now for a year to things like Copilot -- their developers are using these tools to assist in their coding, and they've seen some productivity gains as a result of that. The issue is that, if you know how to prompt, that's how you get [the AI] to do things for you. So the question is, how do you create a prompt, as a vendor, to serve up to the developer something to go do? That's an interesting conversation, and we're having a lot of that conversation. The CTO is saying, 'I don't know what I have. I've lost control. I don't have visibility. I don't have any inventory of my AI in code.' The CISO problem -- and I hear it again and again -- is, 'How can I govern my developers without impacting their development philosophy?' This is the huge pain for the CISO. Every security program starts with visibility and putting controls in place before they even think about the advanced attacks. So I think it's [the same in] the AI era. [Ultimately] it's not only an AI problem -- it's a 'software development in the AI era' problem. One of the key concepts that keeps getting discussed in this whole agent AI ecosystem is about role-based access control. What kind of roles are you assigning to these agents -- whether it's just a detection action, whether it's a response action, whether it's an analysis action -- what role are you providing? And based on that, what kind of access are you providing -- either to access the data or to act on a detection? That's a big area.
Share
Share
Copy Link
At RSAC 2025, the cybersecurity industry grapples with the rapid adoption of AI for both defense and attacks, while open-source models emerge as a collaborative solution to complex threats.
The RSA Conference (RSAC) 2025 showcased a significant shift in the cybersecurity industry, with artificial intelligence (AI) taking center stage. The event highlighted both the potential of AI in strengthening defenses and the new risks it introduces to the digital security landscape 123.
Scale Venture Partners' 2025 Cybersecurity Perspectives Report revealed an encouraging trend: overall cybersecurity effectiveness improved for the first time in three years, increasing from 48% in 2023 to 61% in 2025 1. This improvement is attributed to CISOs and their teams adopting automation at scale while successfully consolidating platforms and reducing vulnerabilities.
The conference saw over 20 vendors announcing AI-based security agents, apps, and platforms 1. These agentic AI solutions are designed to automate SOC investigations, triage large volumes of security alerts, and prevent security incidents. Notable examples include:
A significant development at RSAC 2025 was the introduction of open-source AI models specifically designed for cybersecurity. Cisco's Foundation AI group unveiled Foundation-sec-8B, an 8-billion parameter Large Language Model built on Meta's Llama 3 architecture 2. This model, purpose-built for cybersecurity applications, aims to foster community-driven innovation and collaboration among traditionally competing cybersecurity providers 2.
While AI offers powerful tools for defense, it also introduces new risks and challenges:
Despite the rapid advancement of AI in cybersecurity, experts at RSAC 2025 emphasized that human insight and expertise remain critical. The shortage of skilled cybersecurity professionals continues to be a challenge, with AI tools seen as a complement to human expertise rather than a replacement 5.
As the cybersecurity industry rapidly adopts AI, it must also address the new vulnerabilities and attack vectors that emerge. The focus is shifting towards creating more robust, AI-native workflows across the security lifecycle while maintaining human oversight and expertise 123. The collaborative approach exemplified by open-source models like Cisco's Foundation-sec-8B may pave the way for more unified and effective defenses against increasingly sophisticated cyber threats 2.
Reference
[1]
[2]
[3]
[4]
As AI-driven cyber threats evolve, organizations are turning to advanced technologies and zero-trust frameworks to protect identities and secure endpoints. This shift marks a new era in cybersecurity, where AI is both a threat and a critical defense mechanism.
2 Sources
2 Sources
Experts discuss the potential of AI in bolstering cybersecurity defenses. While AI shows promise in detecting threats, concerns about its dual-use nature and the need for human oversight persist.
2 Sources
2 Sources
As AI enhances cyber threats, organizations must adopt AI-driven security measures to stay ahead. Experts recommend implementing zero-trust architecture, leveraging AI for defense, and addressing human factors to combat sophisticated AI-powered attacks.
4 Sources
4 Sources
The generative AI cybersecurity market is projected to reach $40.1 billion by 2032, with tech giants leading the way. Meanwhile, ethical hackers at DEF CON highlight potential vulnerabilities in AI systems.
2 Sources
2 Sources
A comprehensive look at the current state of AI adoption in enterprises, highlighting challenges, opportunities, and insights from industry leaders at Cisco's AI Summit.
2 Sources
2 Sources
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved