2 Sources
[1]
Cisco Foundation AI Advances Agentic Security Systems for the AI Era
As artificial intelligence becomes increasingly autonomous and embedded across enterprise environments, securing AI systems has emerged as a defining challenge for the industry. Cisco is addressing this challenge by advancing agentic security systems that combine reasoning, adaptive retrieval, and human oversight to support real-world security operations at scale. Through recent innovations from Cisco Foundation AI, including the Foundation-sec-8B-Reasoning model, an adaptive retrieval framework for AI search, and the PEAK Threat Hunting Assistant, Cisco is establishing leadership in secure, agentic AI systems designed specifically for cybersecurity use cases. These efforts reflect Cisco's broader commitment to enabling customers to adopt AI with confidence, transparency, and control, while ensuring security remains foundational.

Traditional AI systems primarily operate through single-step inference. Agentic systems, by contrast, are designed to pursue objectives over time, reason across multiple steps, adapt to new information, and interact safely with enterprise tools and data. In cybersecurity, this shift is especially consequential. Security operations depend on correlating signals across logs, configurations, threat intelligence, and organizational context, while maintaining explainability and accountability to human operators. Cisco Foundation AI is focused on delivering the core capabilities required for these systems, ensuring that agentic AI strengthens security outcomes without compromising trust, governance, or operational safety.

Effective agentic systems begin with the ability to reason through complex problems. In cybersecurity, this requires understanding how signals across logs, configurations, code, and threat intelligence relate to one another over time. The Foundation-sec-8B-Reasoning model establishes this foundational capability. It is the first open-weight reasoning model designed specifically for cybersecurity workflows, enabling structured, multi-step analysis across tasks such as threat modeling, attack path analysis, configuration review, and incident investigation. Unlike general-purpose reasoning models, Foundation-sec-8B-Reasoning is trained to reflect the analytical processes used by security practitioners. By producing explicit reasoning traces alongside its outputs, the model allows analysts to understand how conclusions are reached, supporting trust, validation, and informed decision making. This transparency is essential for agentic systems operating in high-impact security environments.

Reasoning alone is not sufficient if an agent cannot effectively gather and evaluate evidence. Security analysis often involves navigating large, fragmented, and evolving information spaces, where the relevance of data becomes clear only after intermediate findings are examined. Our AI search framework extends the reasoning foundation by enabling adaptive information retrieval. Rather than relying on static, one-time queries, the framework allows models to iteratively refine their search strategy based on evidence encountered during the retrieval process. It supports reflection, backtracking, and strategic query revision, enabling compact models to explore complex information spaces with greater accuracy and efficiency. For security teams, this capability improves threat intelligence analysis, accelerates incident response, and supports proactive vulnerability research across diverse data sources. By tightly coupling retrieval behavior with reasoning, Foundation AI's framework enables agentic systems to continuously adjust their approach as new information emerges.

When reasoning and adaptive retrieval are combined, they enable agentic systems that can support real-world security operations. The PEAK Threat Hunting Assistant demonstrates how these capabilities come together in practice. PEAK applies structured reasoning and adaptive retrieval to one of the most time-intensive aspects of security operations: threat hunt preparation. Using teams of cooperating agents, PEAK conducts public and private intelligence research, refines hypotheses, identifies relevant data sources, and generates structured, step-by-step hunt plans tailored to the user's environment. Human oversight remains central to the system's design. Security analysts guide the process, validate findings, and incorporate organizational context at every stage. With its bring-your-own-model optionality and user-controlled data access architecture, PEAK provides flexibility while maintaining enterprise governance and data security. Together, these capabilities illustrate how Cisco Foundation AI is moving beyond individual models to deliver cohesive agentic systems that reason, retrieve, and act in support of security practitioners.

Collectively, the Reasoning model, AI search framework, and PEAK reflect how Cisco Foundation AI is delivering disproportionate impact by addressing foundational challenges at the intersection of AI and security. Cisco's approach emphasizes open, security-native foundations, enterprise deployability, and architectural rigor. As agentic AI systems become central to enterprise operations, Cisco is ensuring that security, transparency, and control are built into these systems from the outset. This work reinforces Cisco's leadership in Security for AI and its commitment to enabling customers to adopt advanced AI technologies safely and responsibly. Keep up with the latest from Foundation AI on our webpage.
[2]
Cisco Foundation AI debuts agentic security tools to protect autonomous AI systems - SiliconANGLE
Cisco Foundation AI, Cisco Systems Inc.'s research and engineering group focused on building foundational artificial intelligence technologies, today announced a suite of new agentic security tools designed to help enterprises secure AI systems as they become more autonomous, interconnected and embedded across business operations.

The new releases are focused on allowing AI systems to reason through complex security problems, retrieve evidence dynamically from fragmented data sources and execute security workflows while maintaining transparency and human oversight.

The tools address a growing problem: security teams are grappling with the challenge of protecting AI-driven environments that increasingly span cloud platforms, internal systems and external data sources. The Cisco Foundation AI team argues that traditional security approaches struggle to keep up with the speed and complexity of modern AI workflows, particularly as agentic AI systems begin to make decisions and take actions on their own.

Leading the list of announcements is Foundation-sec-8B-Reasoning, an open-weight reasoning model built specifically for cybersecurity use cases. Unlike general-purpose language models, it is optimized for multi-step security analysis, including threat modeling, attack path analysis and incident investigation. The model produces explicit reasoning traces that allow analysts to understand how conclusions are reached, supporting validation, trust and regulatory requirements in high-impact security environments.

Cisco also introduced its Adaptive AI Search Framework, a reasoning-driven information retrieval system designed to move beyond static query-based searches. The framework allows AI models to iteratively refine their search strategies as new information emerges, much like a human security expert would. It is designed to improve threat intelligence analysis and incident response by allowing AI systems to adapt their investigation paths when dealing with incomplete, noisy or fragmented data sources.

The third release, the PEAK Threat Hunting Assistant, is an open-source agentic AI assistant aimed at automating threat hunting preparation. The assistant uses teams of cooperating AI agents to research threat actors and techniques, analyze internal security data and generate customized, step-by-step threat hunt plans. Cisco Foundation AI notes that PEAK keeps humans in the loop, allowing security teams to retain control over decisions, models and data access.

"Collectively, the reasoning model, AI search framework and PEAK reflect how Cisco Foundation AI is delivering disproportionate impact by addressing foundational challenges at the intersection of AI and security," said Yaron Singer, vice president of AI and security at Cisco Foundation AI, in a blog post. "Cisco's approach emphasizes open, security-native foundations, enterprise deployability and architectural rigor," Singer added. "As agentic AI systems become central to enterprise operations, Cisco is ensuring that security, transparency and control are built into these systems from the outset."
Cisco Foundation AI unveiled three agentic security tools designed to secure increasingly autonomous AI systems across enterprises. The releases include Foundation-sec-8B-Reasoning, an open-weight cybersecurity reasoning model, the Adaptive AI Search Framework for dynamic evidence retrieval, and PEAK Threat Hunting Assistant for automated threat hunt preparation. These tools prioritize transparency and human oversight while addressing the complexity of modern AI-driven security operations.
As artificial intelligence systems become more autonomous and embedded across enterprise environments, securing these AI-driven operations has emerged as a critical challenge. Cisco Foundation AI is addressing this need by introducing agentic security systems that combine reasoning capabilities, adaptive retrieval mechanisms, and human oversight to support real-world security operations at scale [1]. The announcement includes three major releases: the Foundation-sec-8B-Reasoning model, the Adaptive AI Search Framework, and the PEAK Threat Hunting Assistant, all designed to protect autonomous AI systems while maintaining transparency and control [2].
Security teams are grappling with the challenge of protecting AI-driven environments that increasingly span cloud platforms, internal systems, and external data sources. Traditional security approaches struggle to keep up with the speed and complexity of modern AI workflows, particularly as agentic systems begin to make decisions and take actions independently [2]. Unlike traditional AI systems that operate through single-step inference, agentic security systems are designed to pursue objectives over time, reason across multiple steps, adapt to new information, and interact safely with enterprise tools and data [1].

The Foundation-sec-8B-Reasoning model is the first open-weight reasoning model designed specifically for cybersecurity workflows. Unlike general-purpose language models, it is optimized for multi-step cybersecurity analysis, including threat modeling, attack path analysis, configuration review, and incident investigation [1]. The model produces explicit reasoning traces alongside its outputs, allowing analysts to understand how conclusions are reached and supporting validation, trust, and regulatory requirements in high-impact security environments [2].
This transparency is essential for AI security operations, where explainability and accountability to human operators remain paramount. The model is trained to reflect the analytical processes used by security practitioners, enabling structured analysis across tasks that correlate signals from logs, configurations, code, and threat intelligence over time [1].
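To make the workflow concrete, the following Python sketch shows how an open-weight reasoning model of this kind could be queried with the Hugging Face transformers library. The repository ID, the chat template behavior, and the form of the returned reasoning trace are assumptions made for illustration, not details confirmed in the announcement.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repository ID; the published name of the open-weight model may differ.
MODEL_ID = "fdtn-ai/Foundation-Sec-8B-Reasoning"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# A multi-step analysis task: the model is expected to return its intermediate
# reasoning alongside the conclusion, which the analyst can review and validate.
messages = [{
    "role": "user",
    "content": (
        "A web server began making outbound DNS requests to a newly registered "
        "domain shortly after a configuration change. Reason step by step through "
        "the plausible attack paths and the evidence needed to confirm or rule out each."
    ),
}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))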
The Adaptive AI Search Framework extends beyond static query-based searches by enabling reasoning-driven information retrieval. Security analysis often involves navigating large, fragmented, and evolving information spaces where the relevance of data becomes clear only after intermediate findings are examined [1]. The framework allows AI models to iteratively refine their search strategies as new information emerges, much like a human security expert would approach an investigation [2].

By supporting reflection, backtracking, and strategic query revision, the framework enables compact models to explore complex information spaces with greater accuracy and efficiency. This capability improves threat intelligence analysis, accelerates incident response, and supports proactive vulnerability research across diverse data sources [1]. The framework is designed to adapt investigation paths when dealing with incomplete, noisy, or fragmented data sources, addressing a common challenge in modern security operations [2].
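The retrieve-reflect-revise behavior described above can be illustrated with a short Python sketch. The llm and search_corpus callables and the ANSWER/REVISE convention are hypothetical stand-ins for whatever model and data sources an organization wires in; this is not Cisco's framework API, only a minimal sketch of adaptive retrieval under those assumptions.

def adaptive_search(question, llm, search_corpus, max_steps=5):
    """Iteratively refine a search strategy based on the evidence found so far."""
    query = question
    evidence = []
    for _ in range(max_steps):
        evidence.extend(search_corpus(query, top_k=5))  # gather candidate documents
        reflection = llm(
            f"Question: {question}\n"
            "Evidence so far:\n" + "\n".join(evidence) + "\n"
            "If the evidence is sufficient, reply 'ANSWER: <conclusion>'. "
            "Otherwise reply 'REVISE: <a better query>', backtracking if the "
            "current line of investigation looks like a dead end."
        )
        if reflection.startswith("ANSWER:"):
            return reflection[len("ANSWER:"):].strip()
        if reflection.startswith("REVISE:"):
            query = reflection[len("REVISE:"):].strip()  # strategic query revision
    # Fall back to the best available answer if the step budget is exhausted.
    return llm(f"Answer from this evidence:\n{question}\n" + "\n".join(evidence))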
The PEAK Threat Hunting Assistant demonstrates how reasoning and adaptive retrieval combine in practice for real-world security operations. This open-source agentic AI assistant automates threat hunting preparation by using teams of cooperating AI agents to research threat actors and techniques, analyze internal security data, and generate customized, step-by-step threat hunt plans [2]. PEAK applies structured reasoning and adaptive retrieval to one of the most time-intensive aspects of security operations: threat hunt preparation [1].

Human oversight remains central to PEAK's design. Security analysts guide the process, validate findings, and incorporate organizational context at every stage. With its bring-your-own-model optionality and user-controlled data access architecture, PEAK provides flexibility while maintaining enterprise governance and data security [1]. This approach allows security teams to retain control over decisions, models, and data access while benefiting from automated preparation workflows [2].
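A simplified Python sketch of this kind of workflow follows: cooperating agents draft a hunt plan, and nothing proceeds until an analyst reviews it. The agent callables, the HuntPlan structure, and the review step are illustrative assumptions about how such a pipeline could be organized, not PEAK's actual interfaces.

from dataclasses import dataclass, field

@dataclass
class HuntPlan:
    hypothesis: str                      # what the hunt sets out to confirm or refute
    data_sources: list = field(default_factory=list)
    steps: list = field(default_factory=list)
    approved: bool = False

def prepare_hunt(topic, intel_agent, data_agent, planner_agent):
    """Cooperating agents research the threat, map data sources, and draft hunt steps."""
    hypothesis = intel_agent(f"Summarize known actors and TTPs related to: {topic}")
    sources = data_agent(f"List internal telemetry relevant to: {hypothesis}")
    steps = planner_agent(
        f"Draft step-by-step hunt queries for: {hypothesis}\nSources: {sources}"
    )
    return HuntPlan(hypothesis=hypothesis, data_sources=sources, steps=steps)

def analyst_review(plan):
    """Human-in-the-loop gate: the analyst sees every step before anything runs."""
    for i, step in enumerate(plan.steps, 1):
        print(f"{i}. {step}")
    plan.approved = input("Approve this hunt plan? [y/N] ").strip().lower() == "y"
    return plan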
According to Yaron Singer, vice president of AI and security at Cisco Foundation AI, these releases reflect how the company is "delivering disproportionate impact by addressing foundational challenges at the intersection of AI and security." Singer emphasized that "Cisco's approach emphasizes open, security-native foundations, enterprise deployability and architectural rigor," ensuring that security, transparency, and control are built into agentic systems from the outset [2]. As agentic AI systems become central to enterprise operations, these tools position organizations to adopt AI with confidence while ensuring security remains foundational to their AI strategy.