[1]
Claude Code Security Causes A SaaS-Pocalypse In Cybersecurity
We have seen this pattern before, even if the specifics look different. Think back to the day AWS introduced GuardDuty, when Microsoft folded Defender for Endpoint into its enterprise licensing commitments and launched Microsoft Sentinel, or when Google acquired Mandiant and eventually Wiz. Sure, the launch of fully autonomous AI agents that can ingest entire enterprise codebases and surface hundreds of previously unknown flaws in a single sweep feels novel, yet the strategy is familiar. AI companies are vying to prove they can collapse that disruption window from years to months by using their own innovations.

Forrester warned years ago that hyperscalers were not building security features to complement the market; they were building to replace it. The model was simple: bundle the capability into spend the enterprise was already committed to, neutralize renewal cycles, and force every incumbent to defend pricing that no longer matched value. Our Security Platform research - which Jess Burn and Jeff Pollard will feature in a talk at RSAC 2026 - shows that ease of integration, productivity gains, and the benefits of automation are key drivers of security platform consolidation efforts. In addition, AI coding agent providers offering application security tools to identify vulnerabilities that could have been written by their own assistants and agents is a top trend in our application security trends report, which will publish soon. Anthropic's February 2026 announcement is the next iteration of the same playbook.

A Market Shock That Is Both Accurate And Overreaching At The Same Time

The market's reaction to the Claude Code Security launch was fast and blunt. JFrog dropped nearly a quarter of its value in a single session. Okta, CrowdStrike, and Zscaler lost meaningful ground. The Global X Cybersecurity ETF closed at its lowest point in over two years. The 24% hit to JFrog was the clearest signal because its value proposition depends on specialized software supply chain controls that AI agents now directly threaten. The read-across to identity, runtime detection, and network security was less accurate. CrowdStrike does not analyze code. Okta does not repair injection flaws. SailPoint does not audit data flows in distributed codebases. Their declines were sentiment contagion, and the market will correct, though those corrections can take quarters, not days.

The category that took real damage is the one most perceived to be dependent on pattern matching: SAST, SCA, and ASPM. These vendors sell structured detection. Frontier models now generate meaning as a native property. Enterprises that once paid six-figure contracts for rules-based scanning will not ignore platforms that include similar or better reasoning as part of a broader subscription they already purchased. We've seen this pattern before, as well: CFOs will run that math before CISOs do - and CEOs and board members will start pushing.

How AI Code Security Tools From Google, OpenAI, and Anthropic Rewrite AppSec

Google moved first with CodeMender, a system that blended Gemini reasoning with traditional program analysis techniques. OpenAI followed with Aardvark, which embedded semantic analysis directly into the CI pipeline. Anthropic, finally, delivered the most consequential shift by bundling Claude Code Security into an existing licensing path, albeit as a research feature for now. It also complements the Claude Code security review that runs on pull requests.
When this happens, incumbents do not compete feature to feature. They compete against economics, efficiency, and productivity. The important part of these releases was not benchmark performance or the quantity or quality of fixes submitted to open source repositories, although those were noteworthy. Instead, it was how these launches collapsed the separation between engineering and security. When systems that write code can also reason about flaws in that code, iterate, and correct inside the same workflows, traditional development and AppSec boundaries will erode.

The Counterpunch Arrived The Same Day: AWS Kiro And Autonomous Agent Risk

Market euphoria for autonomous agents met a hard reality check when the Financial Times reported that Amazon's internal AI coding tool, Kiro, caused a thirteen-hour outage by deleting and recreating a production environment. Amazon blamed user error due to excessive permissions, but that explanation falls flat when humans in the loop are the primary preventative control autonomous agents rely on. The reality is that an autonomous agent made an irreversible choice. The permissions mattered only because the agent made a decision no reasonable human developer would have made. As discussed in keynotes and tracks at Forrester Security & Risk 2025 and in our AEGIS research: users are predictable, and their willpower is finite; agents are relentless, and their willpower is infinite.

This is the tension every enterprise will face. Autonomy creates value and risk. Vendors point to guardrails, human-in-the-loop defaults, and authorization workflows. All of that is necessary; none of it is sufficient. If mistakes in permission modeling turn agentic autonomy into production impact, the real risk is not the tool - it is scale. Every seasoned practitioner knows that permission drift is the baseline condition in every mature environment.

Impact On Application Security And Legacy Security Processes

Security teams cannot wait for the market to stabilize. They must take inventory of their AppSec stack and confront uncomfortable questions. If a tool provides little incremental value beyond what a platform agent can already reason about, its renewal becomes a discretionary decision for the CFO - and one that is difficult for CISOs to argue against. When vulnerability discovery, analysis, and remediation are handled by fast, agentic systems, traditional resource constraints begin to collapse. But only for new code that is intentionally AI-generated, AI-maintained, and continuously corrected. In those environments, AI agents can identify, fix, test, and deploy changes in near real time, reducing the need for human prioritization and preventing technical debt from accumulating in the first place.

How these tools affect legacy code, COTS, open source, new codebases, and infrastructure will also vary. They were not designed for autonomous refactoring, and most organizations lack the context, confidence, or risk tolerance to allow AI systems to make large-scale changes without oversight. In these environments, resource constraints persist, technical debt already exists, and risk-based prioritization remains essential. AI can surface issues and assist with analysis, but it cannot yet autonomously remediate legacy software at scale. CISOs must collaborate closely with application security professionals to assess how new tools add value and which capabilities they might replace or complement.
While Claude Code Security employs reasoning to function more like a human security expert, helping to identify vulnerabilities that traditional methods like fuzz testing or SAST scanners may overlook, it is not intended to replace these tools or established DevSecOps best practices. For now.

Furthermore, CISOs should engage with their existing application security vendors to explore how they are integrating large language models (LLMs) into their solutions. This integration augments deterministic scanning by uncovering vulnerabilities that were previously difficult to detect, thereby enhancing overall security capabilities. In addition, SAST and SCA platforms have already shifted away from "find everything" toward prioritizing what actually must be fixed, curating remediation guidance, and generating automated pull requests directly in developer workflows. They increasingly account for the reality that every code change - whether made by a human or an AI agent - can introduce regressions, and they embed controls, validation, and context to manage that risk. Fixing every flaw remains aspirational, but managing which fixes are safe, necessary, and valuable is where AppSec tooling continues to matter.

The future state is not AI replacing SAST, but AI amplifying the pressure to stop buying separate application security tools in favor of investing in agentic software development platform bundles that include security and remediation discipline as a feature. As agentic systems accelerate discovery and code change, the value of tools that constrain, validate, and contextualize remediation only increases. Do we think Anthropic is focused on conquering the AppSec market? No. We think Anthropic views trust in the code it generates as a dependency and an inhibitor to broader adoption of Claude Code, and these releases are designed to satisfy those concerns. This is asynchronous for now, but as GitHub Copilot Coding Agent shows, it can be performed synchronously during code generation.

What CISOs Should Do Now About AI Agent Security Risks

At the same time, enterprises must evaluate AI security tools using their full vendor risk frameworks. This includes data residency, code persistence policies, prompt caching behavior, and the reliability of the agent itself. The AEGIS framework already flags unresolved issues around agent trust boundaries and prompt injection exposure, which are already part of the attack surface. Identity, runtime detection, and network security remain essential. The market overreacted in its punishment of these categories; security leaders should not (barring concerns about their portfolios of personal investments). However, this does reinforce the expectation that agents will proliferate, making detection of abnormal machine behavior more important, not less.

Governance work cannot wait. Shadow AI already creates unsanctioned data exposure and untracked code modifications. Enterprises must define who is authorized to run autonomous agents, what audit trails must exist, and which code classes are prohibited from external processing. SOC disruption is coming next. The same companies that just entered code security will eventually automate triage and detection.
CISOs must brief their boards, model scenarios, and understand that talent requirements will shift again as agentic systems move into operations. Forrester research already shows that compensation premiums for security talent with AI skills fall between 10% and 30%.

The Bottom Line

February 20, 2026 will be remembered as the day markets finally recognized that AI platforms intend to own the security value chain the same way hyperscalers did before them. They do not need to outperform incumbents. They need only to be good enough while bundled into a product the enterprise already pays for. The economics and productivity incentives will handle the rest. The cybersecurity market will not contract, but value will redistribute. In much the same way that Microsoft and Alphabet became mega-security platform plays, the AI titans hope to achieve the same results. Niche solutions will become key acquisition targets for platform expansion but won't remain standalone companies for long with so many vendors vying for spend and relevance. Forrester clients who want to continue this discussion or dive into Forrester's wide range of AI research can set up a guidance session or inquiry with us.
[2]
Anthropic Rolls Out Autonomous Vulnerability-Hunting AI Tool For Claude Code
Reports have come to light over the past year about how cybercriminals, including state-sponsored actors, have been able to use frontier AI models from AI firm Anthropic to find exploitable vulnerabilities within code, even in the mature and well-maintained systems of large corporations. Now, the OpenAI rival says it is rolling out similar vulnerability-identifying AI capabilities for its customers.

Anthropic is launching Claude Code Security, a new cybersecurity capability for Claude Code, its popular AI coding tool. The company says the tool can scan codebases for security vulnerabilities and suggest targeted software fixes for human review. The company claims the tool differs from traditional security software because it does not rely on rule-based pattern matching -- where code is analyzed and compared to known vulnerabilities -- but instead "reasons through your code like a security researcher." Issues identified by the tool will be assigned severity ratings to help security teams prioritize, as well as "confidence rankings" indicating how certain the system is in its assessment of each risk, after automatically reexamining findings for false positives. The tool will not make changes to code directly but will add identified issues to a dashboard for human security teams to assess and act on.

Claude Code Security is now available in limited research preview for Enterprise and Team customers. Those maintaining open-source code repositories can also apply for free, expedited access. Anthropic is not alone in rolling out this type of tool. OpenAI began beta testing Aardvark, an agentic security researcher powered by GPT-5, in October last year.

Some may be betting traditional cybersecurity companies could be negatively impacted by the rise of tools like Anthropic's new offering. SiliconANGLE reported that cybersecurity firm CrowdStrike Holdings closed the trading session down almost 8% in the immediate aftermath of the announcement, while Cloudflare fell just over 8%. In an interview with PCMag in November 2025, Boris Cherny, Head of Claude Code, gave his predictions about the future of the app, hinting at how future models "are gonna run for a longer period of time without human intervention" as well as increased integrations with other AI models.
[3]
Infosec community panics over Anthropic Claude Code Security
ai-pocalypse Anthropic sent the infosec community into a tizzy on Friday when it rolled out Claude Code Security, a new feature that scans codebases for vulnerabilities and suggests patches to fix the issues. The new security capability is currently available as a limited research preview for enterprise and team customers to test in their environments, and open-source maintainers can apply for free, expedited access. The announcement sent some cybersecurity stocks into a downward spiral and prompted much pontificating about the end of security as we know it - along with a dissenting opinion from CrowdStrike co-founder and CEO George Kurtz. His firm's shares were among those hit on Friday, closing the day down nearly 8 percent from the previous close, and Kurtz asked Claude if its new security tool could replace what CrowdStrike does (tl;dr: Claude said no). The reality, however, isn't nearly as gloomy for the security industry - nor as exciting and sexy as AI evangelists make it out to be. Yes, large language models have shown an ability to flag some pattern-based vulnerabilities at scale. Earlier this month, Anthropic claimed that Claude Opus 4.6 "found and validated more than 500 high-severity vulnerabilities" in open source code. But Claude's security feature is simply the latest and buzziest AI-enabled bug-fixing tool, meaning Anthropic is now doing what other companies at the forefront of agentic AI are also doing. When it comes to securing code, it's a move in the right direction. But it's not sufficient - humans are still required. Amazon also uses AI agents to find security flaws and suggest fixes internally. Microsoft has its own swarm of security agents that, among other tasks, prioritize vulnerability remediation, automate the identification of impacted devices, and then initiate fixes. Google, back in November 2024, said its LLM-based bug-hunting tool Big Sleep was the "first" AI to spot a memory safety vulnerability in the wild and then fix it before the buggy code's official release. More recently, it rolled out an AI agent called CodeMender that it said "automates patch creation, can identify the root cause of a vulnerability, then generate and review a working patch." Last October, OpenAI said it's privately testing Aardvark, an agentic security system based on GPT‑5, that it promises will "help developers and security teams discover and fix security vulnerabilities at scale." As is the case with Claude's code-scanning and patching tool, all of these still need a human to sign off on the fix. "Nothing is applied without human approval: Claude Code Security identifies problems and suggests solutions, but developers always make the call," Anthropic said in announcing the new feature. According to the AI developer, Claude Code Security is context-aware - as opposed to simply doing static code analysis. It "reads and reasons about your code the way a human security researcher would: understanding how components interact, tracing how data moves through your application, and catching complex vulnerabilities that rule-based tools miss," the company said. This will likely prove to be a useful tool for developers and security analysts, as researchers have repeatedly shown that AI is very good at detecting vulnerabilities. (It's also good at writing buggy code and opening up new attack vectors for criminals.) "Anything that helps developers write better, safer code is a good thing," Glenn Weinstein, CEO of supply-chain security shop Cloudsmith, told The Register. 
"Claude Code Security is one of many safeguards in a wide range of defenses." Isaac Evans, CEO of developer-focused security firm Semgrep, told The Register he's "very excited for Claude Code Security, even though we haven't tried it yet." "LLMs are fantastic for security and have a great opportunity to actually make a dent in the coming wave of software vulnerabilities," he said. However, the real test of these types of bug-hunting AI agents will be how well they perform at scale, according to Evans. "So far none of the foundation model companies - Big Sleep, Aardvark, OpenAI - have published detailed statistics on how many false positives they experienced to get the results they had, or the cost to do so," Evans said. "That matters: Was this a $1 million investment? $10 million? This is some level of marketing-first, science-second. We are also hearing reports from security researcher friends that of the 500 vulnerabilities, not all of them are truly 'high-severity' as described." ®
[4]
Cyber Stocks Slide as Anthropic Unveils 'Claude Code Security'
Shares of cybersecurity software companies tumbled Friday after Anthropic PBC introduced a new security feature into its Claude AI model. CrowdStrike Holdings was among the biggest decliners, falling as much as 6.5%, while Cloudflare Inc. slumped more than 6%. Meanwhile, Zscaler dropped 3.5%, SailPoint shed 6.8%, and Okta declined 5.7%. The Global X Cybersecurity ETF fell as much as 3.8%, extending its losses on the year to 14%.

Anthropic said the new tool "scans codebases for security vulnerabilities and suggests targeted software patches for human review." The firm said the update is available in a limited research preview for now.

The drop by cybersecurity stocks was the latest example of software shares slumping on concerns over competition from AI-native companies. Anxiety has been building for weeks, with the iShares Expanded Tech-Software Sector ETF down more than 23% this year, putting it on track for its biggest quarterly percentage drop since the financial crisis in 2008. Much of the selling has come on new AI tools released by companies like Anthropic, OpenAI, and Alphabet Inc., with investors fretting that the ability to "vibe code" - use AI to write software code - will allow users to create their own applications, diminishing demand for legacy products and weighing on their growth, margins, and pricing power.
[5]
Cybersecurity stocks drop for a second day as new Anthropic tool fuels AI disruption fears
Cybersecurity stocks dropped for a second day on Monday as investors fretted over new artificial intelligence security tools that threaten to displace the sector's longstanding business models. Anthropic on Friday debuted a new security tool to its Claude model that the AI lab said could scan software code for vulnerabilities and suggest solutions. The move led to a freefall in cyber stocks that spilled over into Monday's session, raising concerns that these tools could replace tasks handled by cybersecurity companies.

CrowdStrike and Zscaler dropped about 9% each, while Netskope declined nearly 10%. SailPoint fell 6%, while Okta, SentinelOne and Fortinet lost more than 4% each. Palo Alto Networks was last down 2%, while Cloudflare, which benefited from recent Moltbot enthusiasm, dropped 7%. The iShares Cybersecurity & Tech ETF dropped nearly 4%.

In a post to LinkedIn over the weekend, CrowdStrike CEO George Kurtz defended the company's moat and said the new AI tool addresses different cybersecurity issues than the Austin-based company. "AI innovation is inspiring," he wrote. "But let's stay grounded in reality: an AI capability that scans code does not replace the Falcon platform -- or your security program. Security requires an independent, battle-tested platform built to stop breaches."

New AI tools capable of quickly creating websites and apps through prompts and texts have rattled the software sector in recent months. Cybersecurity is just the latest sector to feel the pinch.
[6]
Obedient Traders Respond to Claude Code Cybersecurity Plugin by Selling Cybersecurity Stocks
Yesterday, Anthropic released a new capability for Claude Code: Claude Code Security. Claude Code can now, theoretically, dig around in your codebase and find potential vulnerabilities. In the accompanying blog post, Anthropic says this tool solves a problem: "too many software vulnerabilities and not enough people to address them." The vulnerabilities flagged via Claude Code Security are accompanied by offers of "targeted software patches for human review," the post says.

But you'll never guess what happened: according to Bloomberg, cybersecurity stocks dipped. It's almost like investors didn't read all that stuff in the blog post about how this tech is supposed to accentuate existing workflows, and help, rather than replace, workers. Per Bloomberg, the companies whose stock was sold off included CrowdStrike and Cloudflare, which declined by 8% and 8.1%, respectively. SailPoint declined a very concerning 9.4%. Okta went down 9.2%. And there were others. The relevant companies may be especially vulnerable because some have automated research software that truly does sound like it could be made redundant by Claude Code Security, assuming it's proven actually useful and superior. CrowdStrike, for instance, launched an AI-powered "network vulnerability assessment" tool for CrowdStrike Falcon about a year ago.

The Wall Street trend in which investors unload a given software stock the millisecond Anthropic announces a new Claude Code capability is known as the SaaSpocalypse. The SaaSpocalypse is just a narrative, and it leads to all sorts of cloudy thinking. It is, for instance, more than a bit overblown to say, as this tweet does, that "Millions of jobs and companies just got replaced." But investment isn't based entirely on a rational assessment of the future value of a company. It's also based on narratives, and how investors expect other investors to respond to them, and how investors expect other investors to respond to the expectations of other investors, and on, and on.

Nvidia CEO Jensen Huang stands to profit more than anyone from AI hype, and for his part, he called the SaaSpocalypse "the most illogical thing in the world" at a conference about two weeks ago. Huang's counter-narrative, however, is also a bit fanciful. He implies that AI agents will operate like humans and simply use human tools. "If you were a humanoid robot, would you use a screwdriver or invent a new screwdriver?" he said.

Yet another narrative worth considering is tech critic Cory Doctorow's. AI automation tools for something like radiology, he recently said in a speech, have a lot of potential as tools that could make for better overall results if added to existing processes. "But no one is investing hundreds of billions in AI companies because they think AI will make radiology more expensive, not even if [it] also makes radiology more accurate," Doctorow said. Replace the concept of radiology with "cybersecurity" as you read the rest of this: The market's bet on AI is that an AI salesman will visit the CEO of Kaiser and make this pitch: "Look, you fire 9/10s of your radiologists, saving $20m/year, you give us $10m/year, and you net $10m/year, and the remaining radiologists' job will be to oversee the diagnoses the AI makes at superhuman speed, and somehow remain vigilant as they do so, despite the fact that the AI is usually right, except when it's catastrophically wrong.
In other words, jobs are cut, quality degrades, and the job of the human in the loop becomes, essentially, blame-taker when something inevitably goes wrong. And keep in mind that to an investor, it's not a worse company if cutting 90% of payroll leads to a worse result for the customer, but a more profitable company overall.
[7]
Anthropic's Claude Code Security finds 500+ vulnerabilities: how security leaders should respond
Anthropic pointed its most advanced AI model, Claude Opus 4.6, at production open-source codebases and found a plethora of security holes: more than 500 high-severity vulnerabilities that had survived decades of expert review and millions of hours of fuzzing, with each candidate vetted through internal and external security review before disclosure. Fifteen days later, the company productized the capability and launched Claude Code Security.

Security directors responsible for seven-figure vulnerability management stacks should expect a common question from their boards in the next review cycle. VentureBeat anticipates the emails and conversations will start with, "How do we add reasoning-based scanning before attackers get there first?", because as Anthropic's review found, simply pointing an AI model at exposed code can be enough to identify - and in the case of malicious actors, exploit - security lapses in production code. The answer matters more than the number, and it is primarily structural: how your tooling and processes allocate work between pattern-based scanners and reasoning-based analysis. CodeQL and the tools built on it match code against known patterns. Claude Code Security, which Anthropic launched February 20 as a limited research preview, reasons about code the way a human security researcher would. It follows how data moves through an application and catches flaws in business logic and access control that no rule set covers.

The board conversation security leaders need to have this week

Five hundred newly discovered zero-days is less a scare statistic than a standing budget justification for rethinking how you fund code security. The reasoning capability Claude Code Security represents, and its inevitable competitors, need to drive the procurement conversation. Static application security testing (SAST) catches known vulnerability classes. Reasoning-based scanners find what pattern-matching was never designed to detect. Both have a role. Anthropic published the zero-day research on February 5. Fifteen days later, they shipped the product. While it's the same model and capabilities, it is now available to Enterprise and Team customers.

What Claude does that CodeQL couldn't

GitHub has offered CodeQL-based scanning through Advanced Security for years, and added Copilot Autofix in August 2024 to generate LLM-suggested fixes for alerts. Security teams rely on it. But the detection boundary is the CodeQL rule set, and everything outside that boundary stays invisible. Claude Code Security extends that boundary by generating and testing its own hypotheses about how data and control flow through an application, including cases that no existing rule set describes. CodeQL solves the problem it was built to solve: data-flow analysis within predefined queries. It tells you whether tainted input reaches a dangerous function. CodeQL is not designed to autonomously read a project's commit history, infer an incomplete patch, trace that logic into another file, and then assemble a working proof-of-concept exploit end to end. Claude did exactly that on GhostScript, OpenSC, and CGIF, each time using a different reasoning strategy.

"The real shift is from pattern-matching to hypothesis generation," said Merritt Baer, CSO at Enkrypt AI, advisor to Andesite and AppOmni, and former CISO at Reco, in an exclusive interview with VentureBeat. "That's a step-function increase in discovery power, and it demands equally strong human and technical controls."
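To see the boundary that CodeQL-style queries patrol, consider a minimal, hedged sketch of a classic source-to-sink taint flow - the shape of bug a predefined data-flow rule is built to flag. The program is hypothetical and not drawn from any project discussed in this piece:

```c
/* Minimal taint-flow sketch: attacker-controlled input (the "source")
 * reaches a dangerous call (the "sink") without sanitization. This is
 * exactly the source-to-sink shape a predefined data-flow query matches.
 * Hypothetical example, not code from any real project. */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    char cmd[256];

    if (argc < 2)
        return 1;

    /* Source: argv[1] is attacker-controlled. */
    snprintf(cmd, sizeof(cmd), "ping -c 1 %s", argv[1]);

    /* Sink: passing "127.0.0.1; cat /etc/passwd" as argv[1] injects a
     * second shell command, because the input was never validated. */
    system(cmd);
    return 0;
}
```

A taint-tracking query flags this because the source, the sink, and the path between them all match predefined patterns. The three findings described next have no such pattern to match.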
Three proof points from Anthropic's published methodology show where pattern-matching ends and hypothesis generation begins.

Commit history analysis across files. GhostScript is a widely deployed utility for processing PostScript and PDF files. Fuzzing turned up nothing, and neither did manual analysis. Then Claude pulled the Git commit history, found a patch that added stack bounds checking for font handling in gstype1.c, and reversed the logic: if the fix was needed there, every other call to that function without the fix was still vulnerable. In gdevpsfx.c, a completely different file, the call to the same function lacked the bounds checking patched elsewhere. Claude built a working proof-of-concept crash. No CodeQL rule describes that bug today. The maintainers have since patched it.

Reasoning about preconditions that fuzzers can't reach. OpenSC processes smart card data. Standard approaches failed here, too, so Claude searched the repository for function calls that are frequently vulnerable and found a location where multiple strcat operations ran in succession without length checking on the output buffer. Fuzzers rarely reached that code path because too many preconditions stood in the way. Claude reasoned about which code fragments looked interesting, constructed a buffer overflow, and proved the vulnerability.

Algorithm-level edge cases that no coverage metric catches. CGIF is a library for processing GIF files. This vulnerability required understanding how LZW compression builds a dictionary of tokens. CGIF assumed compressed output would always be smaller than uncompressed input, which is almost always true. Claude recognized that if the LZW dictionary filled up and triggered resets, the compressed output could exceed the uncompressed size, overflowing the buffer. Even 100% branch coverage wouldn't catch this. The flaw demands a particular sequence of operations that exercises an edge case in the compression algorithm itself. Random input generation almost never produces it. Claude did.

Baer sees something broader in that progression. "The challenge with reasoning isn't accuracy, it's agency," she told VentureBeat. "Once a system can form hypotheses and pursue them, you've shifted from a lookup tool to something that can explore your environment in ways that are harder to predict and constrain."

How Anthropic validated 500+ findings

Anthropic placed Claude inside a sandboxed virtual machine with standard utilities and vulnerability analysis tools. The red team didn't provide any specialized instructions, custom harnesses, or task-specific prompting. Just the model and the code. The red team focused on memory corruption vulnerabilities because they're the easiest to confirm objectively. Crash monitoring and address sanitizers don't leave room for debate. Claude filtered its own output, deduplicating and reprioritizing before human researchers touched anything. When the confirmed count kept climbing, Anthropic brought in external security professionals to validate findings and write patches.

Every target was an open-source project underpinning enterprise systems and critical infrastructure. Small teams maintain many of them, staffed by volunteers, not security professionals. When a vulnerability sits in one of these projects for a decade, every product that pulls from it inherits the risk. Anthropic didn't start with the product launch. The defensive research spans more than a year.
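To ground the OpenSC-style proof point above in code, here is a minimal, hedged sketch of the unchecked-strcat pattern. The buffer size, function, and field names are invented for illustration and are not OpenSC's actual code:

```c
/* Sketch of successive strcat() calls with no length check against the
 * destination buffer - the pattern described in the OpenSC finding above.
 * All names and sizes here are hypothetical. */
#include <stdio.h>
#include <string.h>

#define LABEL_LEN 64

/* Each field is parsed from attacker-influenced smart card data. */
void build_label(char out[LABEL_LEN],
                 const char *vendor, const char *model, const char *serial) {
    strcpy(out, vendor);   /* safe only if strlen(vendor) < LABEL_LEN */
    strcat(out, " ");
    strcat(out, model);    /* combined length is never checked        */
    strcat(out, " ");
    strcat(out, serial);   /* long inputs overflow out[] right here   */
}

int main(void) {
    char label[LABEL_LEN];
    /* Fits: total length is well under LABEL_LEN. */
    build_label(label, "ACME", "CardOS", "12345");
    printf("%s\n", label);
    /* A hostile card supplying ~100-byte fields would overflow label[]
     * here - the kind of crash a proof-of-concept demonstrates. */
    return 0;
}
```

A fuzzer rarely drives execution this deep because the surrounding parsing imposes many preconditions; a reviewer reasoning about the code can see directly that no call ever compares the accumulated length against LABEL_LEN.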
Anthropic entered Claude in competitive Capture-the-Flag events, where it ranked in the top 3% of PicoCTF globally, solved 19 of 20 challenges in the HackTheBox AI vs Human CTF, and placed 6th out of 9 teams defending live networks against human red team attacks at Western Regional CCDC. Anthropic also partnered with Pacific Northwest National Laboratory to test Claude against a simulated water treatment plant. PNNL's researchers estimated that the model completed adversary emulation in three hours. The traditional process takes multiple weeks.

The dual-use question security leaders can't avoid

The same reasoning that finds a vulnerability can help an attacker exploit one. Frontier Red Team leader Logan Graham acknowledged this directly to Fortune's Sharon Goldman. He told Fortune the models can now explore codebases autonomously and follow investigative leads faster than a junior security researcher. Gabby Curtis, Anthropic's communications lead, told VentureBeat in an exclusive interview that the company built Claude Code Security to make defensive capabilities more widely available, "tipping the scales towards defenders." She was equally direct about the tension: "The same reasoning that helps Claude find and fix a vulnerability could help an attacker exploit it, so we're being deliberate about how we release this."

In interviews with more than 40 CISOs across industries, VentureBeat found that formal governance frameworks for reasoning-based scanning tools are the exception, not the norm. The most common response was that the area is so nascent that many CISOs didn't expect this capability to arrive so early in 2026. The question every security director has to answer before deploying this: if I give my team a tool that finds zero-days through reasoning, have I unintentionally expanded my internal threat surface?

"You didn't weaponize your internal surface, you revealed it," Baer told VentureBeat. "These tools can be helpful, but they also may surface latent risk faster and more scalably. The same tool that finds zero-days for defense can expose gaps in your threat model. Keep in mind that most intrusions don't come from zero-days, they come from misconfigurations."

"In addition to the access and attack path risk, there is IP risk," she said. "Not just exfiltration, but transformation. Reasoning models can internalize and re-express proprietary insights in ways that blur the line between use and leakage."

The release is deliberately constrained. Enterprise and Team customers only, through a limited research preview. Open-source maintainers apply for free expedited access. Findings go through multi-stage self-verification before reaching an analyst, with severity ratings and confidence scores attached. Every patch requires human approval. Anthropic also built detection into the model itself. In a blog post detailing the safeguards, the company described deploying probes that measure activations within the model as it generates responses, with new cyber-specific probes designed to track potential misuse. On the enforcement side, Anthropic is expanding its response capabilities to include real-time intervention, including blocking traffic it detects as malicious. Graham was direct with Axios: the models are extremely good at finding vulnerabilities, and he expects them to get much better still.
VentureBeat asked Anthropic for the false-positive rate before and after self-verification, the number of disclosed vulnerabilities with patches landed versus still in triage, and the specific safeguards that distinguish attacker use from defender use. The lead researcher on the 500-vulnerability project was unavailable, and the company declined to share specific attacker-detection mechanisms to avoid tipping off threat actors.

"Offense and defense are converging in capability," Baer said. "The differentiator is oversight. If you can't audit and bound how the tool is used, you've created another risk." That speed advantage doesn't favor defenders by default. It favors whoever adopts it first. Security directors who move early set the terms.

Anthropic isn't alone. The pattern is repeating. Security researcher Sean Heelan used OpenAI's o3 model with no custom tooling and no agentic framework to discover CVE-2025-37899, a previously unknown use-after-free vulnerability in the Linux kernel's SMB implementation. The model analyzed over 12,000 lines of code and identified a race condition that traditional static analysis tools consistently missed because detecting it requires understanding concurrent thread interactions across connections. Separately, AI security startup AISLE discovered all 12 zero-day vulnerabilities announced in OpenSSL's January 2026 security patch, including a rare high-severity finding (CVE-2025-15467, a stack buffer overflow in CMS message parsing that is potentially remotely exploitable without valid key material). AISLE co-founder and chief scientist Stanislav Fort reported that his team's AI system accounted for 13 of the 14 total OpenSSL CVEs assigned in 2025. OpenSSL is among the most scrutinized cryptographic libraries on the planet. Fuzzers have run against it for years. The AI found what they were not designed to find.

The window is already open

Those 500 vulnerabilities live in open-source projects that enterprise applications depend on. Anthropic is disclosing and patching, but the window between discovery and adoption of those patches is where attackers operate today. The same model improvements behind Claude Code Security are available to anyone with API access. If your team is evaluating these capabilities, the limited research preview is the right place to start, with clearly defined data handling rules, audit logging, and success criteria agreed up front.
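As a coda on the Heelan finding above: the class of bug involved - a use-after-free triggered by a race between connections - is worth seeing in miniature, because it shows why single-path pattern matching misses it. A hedged, self-contained sketch with invented names, not kernel code:

```c
/* Hedged sketch of the use-after-free class behind the finding described
 * above: one connection thread frees a shared session field during logoff
 * while another thread still uses it. All names are invented, not drawn
 * from the Linux kernel's SMB code. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct session { char *user; };
static struct session *sess;        /* shared across connection threads */

void *logoff_thread(void *arg) {    /* connection A: handles LOGOFF */
    (void)arg;
    free(sess->user);               /* fix would take a reference or lock */
    sess->user = NULL;
    return NULL;
}

void *request_thread(void *arg) {   /* connection B: still mid-request */
    (void)arg;
    /* Race: if logoff_thread runs between this check and the use below,
     * the printf dereferences freed memory. */
    if (sess->user)
        printf("handling request for %s\n", sess->user);
    return NULL;
}

int main(void) {
    sess = malloc(sizeof *sess);
    sess->user = strdup("alice");

    pthread_t a, b;
    pthread_create(&a, NULL, logoff_thread, NULL);
    pthread_create(&b, NULL, request_thread, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);

    free(sess);
    return 0;
}
```

No single-threaded path through this code is wrong, which is why rule-based scanners pass it; the bug exists only in the interleaving - the cross-connection reasoning the model had to perform.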
[8]
Anthropic's new code security tool causes market panic
Why it matters: Investors are panicking and security executives are now on the defensive. But insiders don't see this as a total identity crisis for the industry.

Driving the news: Anthropic on Friday announced its new Claude Code Security product, which can automatically scan codebases for vulnerabilities and suggest patches. The news caused shares of several major cyber companies to tumble.
* "The Global X Cybersecurity ETF fell 4.9% and closed at its lowest since November 2023" on Friday, according to Bloomberg.
* As of Monday, CrowdStrike shares have fallen 11%. Cloudflare took a 9.6% dive. Identity management company SailPoint slid about 8%.

Reality check: Claude Code Security will directly compete with other code-security companies, but won't disrupt most companies' core businesses.
* Corporations require a layered approach to defend against and detect cyberattacks.
* Claude Code Security can help make sure that the "vibe coding" projects employees are launching are written in a more secure way -- but it can't detect when an intruder is lurking on a network or when an employee has clicked on a phishing link.

What they're saying: Over the weekend, CrowdStrike CEO George Kurtz wrote on LinkedIn that "AI is powerful. It's transformative. And it absolutely makes security better. But AI doesn't eliminate the need for security. It increases it."
* Before Anthropic's announcement, Palo Alto Networks CEO Nikesh Arora told analysts in an earnings call last week that he's "confused why the market is treating AI as a threat to at least cybersecurity," noting that customers are eager to strengthen their defenses as AI helps attackers get faster.

Between the lines: Most cybersecurity investors and analysts see the stock tumble as an overreaction.
* "Yes there is the potential for more software to be secure by design thanks to #AI (false positives from hallucinations notwithstanding), but cybersecurity is not a technological problem, it's a human one," Aaron Jacobsen, partner at NEA, wrote.
* Raymond James analyst Mark Cash called the market reaction "excessive" and noted in a report that the market was likely "extrapolating well beyond the current functionality" of Claude Code Security.

Our thought bubble, from Axios' enterprise software deals reporter Chris Metinko: "It looks like we'll see this type of thing happen anytime a large AI lab has a new release -- overreaction will set in on the public market."

The intrigue: Over the last few months, Axios has been asking VC investors about the value of their investments in standalone AI-powered code security companies.
* They're betting corporate security teams likely won't want to consolidate all of their security needs into an AI provider.

What to watch: Anthropic is unlikely to stop at just Claude Code Security as the model gets better at analyzing applications for security flaws. OpenAI has also been rolling out its own AI-powered security tools.
[9]
AI can now hunt software bugs on its own. Anthropic is turning that into a security tool. | Fortune
Now, instead of just scanning code for known problem patterns, Claude Code Security can review entire codebases, more like a human expert would - looking at how different pieces of software interact and how data moves through a system. The AI double-checks its own findings, rates how severe each issue is, and suggests fixes. But while the system can investigate code on its own, it does not apply fixes automatically - a capability that could be dangerous in its own right. Developers must review and approve every change.

Claude Code Security builds on over a year of research by the company's Frontier Red Team, an internal group of about 15 researchers tasked with stress-testing the company's most advanced AI systems and probing how they might be misused in areas such as cybersecurity. The Frontier Red Team's most recent research found that Anthropic's new Opus 4.6 model has significantly improved at finding new, high-severity vulnerabilities - software flaws that allow attackers to break into systems without permission, steal sensitive data, or disrupt critical services - across vast amounts of code. In fact, in testing open source software that runs across enterprise systems and in critical infrastructure, Opus 4.6 found some of these vulnerabilities that had gone undetected for decades, and it was able to do so without task-specific tooling, custom scaffolding, or specialized prompting.

Frontier Red Team leader Logan Graham told Fortune that Claude Code Security is meant to put this power in the hands of security teams that need to boost their defensive capabilities. The tool is being released cautiously as a limited research preview for its Enterprise and Team customers. Anthropic is also giving free expedited access to maintainers of open-source repositories - the often under-resourced developers responsible for keeping widely used public software running safely.

"This is the next step as a company committed to powering the defense of cybersecurity," he said. "We are now using [Opus 4.6] meaningfully ourselves, we have been doing lots of experimentation - the models are meaningfully better."

That is particularly true in terms of autonomy, he added, pointing out that Opus 4.6's agentic capabilities mean it can investigate security flaws and use various tools to test code. In practice, that means the AI can explore a codebase step by step, test how different components behave, and follow leads much like a junior security researcher would - only much faster. "That makes a really big difference for security engineers and researchers," Graham said. "It's going to be a force multiplier for security teams. It's going to allow them to do more."

Of course, it's not just defenders that look for security flaws - attackers are also using AI to find exploitable weaknesses faster than ever, Graham said, so it's important to make sure that improvements favor the good guys. So in addition to the research preview, he said Anthropic is investing in safeguards to detect malicious use and identify when attackers might be using the system. "It's really important to make sure that what is a dual-use capability gives defenders a leg up," he said.
[10]
Cybersecurity stocks drop after Anthropic debuts Claude Code Security - SiliconANGLE
Shares of several major cybersecurity providers dropped today after Anthropic PBC introduced a tool for finding software vulnerabilities. The offering is called Claude Code Security. It's available as a limited research preview in the Enterprise and Team editions of Anthropic's Claude artificial intelligence service. Additionally, the company plans to provide "expedited access" for open-source project maintainers.

Software teams scan their code for vulnerabilities using so-called static analysis tools. Such programs are usually built around a database of rules, or definitions of common cybersecurity vulnerabilities. A static analysis tool works by checking each snippet of code in an application against its rules. Static vulnerability definitions can't cover every variation of every exploit. For example, one of a static analysis tool's rules might not support a niche programming language that a developer used to implement an interface module. As a result, rule-based static analysis tools often miss certain cybersecurity issues.

Anthropic positions Claude Code Security as a more effective alternative. According to the company, the tool doesn't use static rules but instead "reasons about your code the way a human security researcher would." It maps out how an application's components interact with one another and the way data moves through them to find potential weak points. Developers can activate Claude Code Security by connecting it to a GitHub repository and asking it to scan the code inside.

According to Anthropic, the tool can uncover a wide range of vulnerabilities. Applications include filters that block malicious user input such as unauthorized SQL commands. Claude Code Security can find code snippets that lack an effective input filter. It also spots more sophisticated issues, such as vulnerabilities that make it possible to bypass an application's authentication mechanism. Claude Code Security ranks the security flaws that it finds based on severity. Additionally, it generates a natural language explanation of each one to expedite analysis. A "suggest fix" button below the explainer enables cybersecurity professionals to have Claude generate a patch.

CrowdStrike Inc. closed the trading session down 7.56%, while Cloudflare Inc. declined 8.09%. Several other cybersecurity companies also logged share price declines. The selloff is the second that Anthropic has set off in the enterprise software ecosystem since the start of the month. The previous one was sparked by the company's launch of Claude Cowork plugins.

Claude Code Security is rolling out about four months after OpenAI Group PBC introduced a cybersecurity automation tool of its own. Aardvark offers many of the same capabilities as Anthropic's new tool. According to OpenAI, it tests vulnerabilities in an isolated sandbox to estimate how difficult it would be for hackers to exploit them.

There are several ways OpenAI and Anthropic could expand their cybersecurity offerings over time. Enterprise software teams use systems called CI/CD, or continuous integration and continuous delivery, pipelines to roll out software updates. The two AI providers could integrate their cybersecurity tools with popular CI/CD products to automatically block updates that contain vulnerable code. Many established cybersecurity companies already offer such a capability.
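To illustrate the "missing input filter" class described above, here is a minimal, hedged sketch in C. The table name, function names, and the parameterized alternative are invented for illustration, not drawn from any real codebase:

```c
/* Hedged sketch of the "missing input filter" class described above:
 * user input is spliced into a SQL string instead of being bound as a
 * parameter. All names here are invented for illustration. */
#include <stdio.h>

/* Vulnerable: attacker input "x' OR '1'='1" changes the query's meaning. */
void build_query_unsafe(char *out, size_t n, const char *user_input) {
    snprintf(out, n, "SELECT * FROM users WHERE name = '%s'", user_input);
}

/* Safer shape: keep the SQL fixed and pass the value separately, the way
 * a parameterized API (e.g., sqlite3_bind_text) does at execution time. */
void build_query_safe(char *sql_out, size_t n) {
    snprintf(sql_out, n, "SELECT * FROM users WHERE name = ?");
    /* the value is bound later, never concatenated into the string */
}

int main(void) {
    char q[128];
    build_query_unsafe(q, sizeof q, "x' OR '1'='1");
    printf("%s\n", q);   /* prints the query with the injected predicate */
    return 0;
}
```

A rule-based scanner flags the unsafe variant when its rules recognize the string-concatenation idiom; the reasoning-based approach the article describes is aimed at cases where the idiom is disguised enough that no rule fires.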
[11]
Cybersecurity Stocks Slump After Anthropic AI Launch
Anthropic's Claude Code Security sent shockwaves through cybersecurity markets, with CrowdStrike and Palo Alto Networks among the hardest hit.

Shares in leading listed cybersecurity companies have fallen since Anthropic launched Claude Code Security, an AI-powered code vulnerability scanner, on Friday. Anthropic released the tool on Feb. 20 as a limited research preview. According to the company website, Anthropic's chatbot Claude "scans your entire codebase for vulnerabilities, validates each finding to minimize false positives, and suggests patches you can review and approve." Claude reasons through code "like a skilled security researcher": it understands context, traces data flows, and "catches vulnerabilities that pattern-matching tools miss" before proposing a fix.

Anthropic's most advanced AI model, Claude Opus 4.6, has already found more than 500 high-severity vulnerabilities that had survived decades of expert review, VentureBeat reported on Monday. ChatGPT maker OpenAI launched a new benchmark on Feb. 19 to evaluate how well different AI models detect, patch, and exploit security vulnerabilities in smart contracts. Claude Opus 4.6 came out on top.

The top five US-listed information technology security companies by market capitalization have all seen heavy share price declines continue this week. Palo Alto Networks, America's largest cybersecurity company with a market capitalization of $116 billion, saw its stock (PANW) slide almost 9% since the launch. CrowdStrike, which provides endpoint security, threat intelligence, and cyberattack response services, suffered an even greater loss, with its shares tanking 18% since Feb. 20 and erasing $20 billion in market cap. Meanwhile, California-based Fortinet, which develops and sells security products, lost 9% from its share price (FTNT) over the same period, according to Google Finance. Other leading cybersecurity firms, such as Cloudflare and Zscaler, also saw their stocks slide amid the new AI competition.

"What you're seeing today is really the continuation of a panic-driven, narrative-led selloff," Shrenik Kothari, security and infrastructure analyst at Robert W. Baird, told Reuters. "These reactions are not irrational," noted the Kobeissi Letter in a lengthy post on the threat of AI taking over the IT workforce on Tuesday. "When AI replicates what workers do, pricing power shifts to the buyer. That is the first-order impact, and it is very real."

Analysts at financial services firm Wedbush said the stock sell-off was due to "AI Ghost Trade fears." They noted that Anthropic's move into the market reinforces a broader view that cybersecurity will be a key beneficiary of the AI boom, Proactive reported on Tuesday.
[12]
What is Anthropic's Claude Code Security, and why it's rattling cybersecurity stocks
Cybersecurity stocks dropped up to 11% on February 23, 2026, after Anthropic launched Claude Code Security. The AI-powered code security tool scans entire codebases, detects software vulnerabilities, prioritizes critical risks, and recommends patches for developers. Over 500 hidden flaws were found in open-source projects. Investors fear AI disruption in cybersecurity markets. The sell-off highlights rising anxiety over AI-driven vulnerability detection reshaping enterprise software.

On February 23, 2026, shares of major cybersecurity companies plunged as much as 11% in a single trading session after the launch of Claude Code Security by Anthropic. CrowdStrike, Datadog and Zscaler fell 11%, while Fortinet and Okta dropped 6%. SentinelOne declined 5%, and Palo Alto Networks slid 3%. The Global X Cybersecurity ETF hit its lowest level since November 2023. The trigger was not earnings. It was not regulation. It was an AI product in research preview. So, what is Claude Code Security, and why did it rattle global cybersecurity stocks?

Anthropic's Claude Code Security is an AI cybersecurity tool built on Claude that scans source code, detects vulnerabilities, and recommends fixes using advanced reasoning models. Investors reacted instantly because this directly targets the code scanning and application security layer - a core revenue stream for many cybersecurity vendors. In simple terms, markets fear AI could automate what humans and legacy software tools currently do.

It operates more like a high-level security researcher than a software scanner. The tool maps data flows across thousands of files, identifies complex business logic flaws, and suggests targeted software patches for human review. This shifts the focus from merely "flagging" a problem to providing a ready-to-deploy solution, drastically reducing the mean time to remediate (MTTR) for developers. This reaction reflects a broader theme shaping 2026 markets: artificial intelligence disrupting established enterprise software models faster than expected.

Claude Code Security is an AI-driven code security and vulnerability detection system integrated into Anthropic's Claude Code platform. It is currently available as a limited research preview for Enterprise and Team customers, with open-source maintainers eligible for expedited access. Unlike traditional static analysis tools that flag known weaknesses such as exposed API keys or outdated encryption libraries, Claude Code Security analyzes software more holistically. It tracks how data flows across systems. It examines how components interact. It detects complex, context-dependent vulnerabilities.

According to Anthropic, its latest model - Claude Opus 4.6 - identified more than 500 previously undiscovered vulnerabilities in production open-source codebases. Some flaws had reportedly remained hidden for years despite expert audits. The data released by Anthropic is hard for Wall Street to ignore. During internal testing, Claude Opus 4.6 outperformed every existing benchmark for vulnerability detection, even spotting memory corruption issues in critical utilities like GhostScript and OpenSC. This level of precision suggests that AI agents can now perform zero-day research at an industrial scale. That claim is what captured investor attention.

Claude Code Security uses a multi-stage verification process. The AI effectively challenges its own findings before reporting them.
Each vulnerability receives a severity rating and a confidence score. Importantly, the system does not automatically deploy fixes. Developers must review and approve suggested patches. This human-in-the-loop model positions the tool as a code auditing assistant rather than a fully autonomous security system. However, its ability to scan entire codebases quickly and uncover deep logic flaws differentiates it from rule-based scanners. Traditional static analysis relies on signature detection and predefined patterns. Claude's approach is contextual and AI-driven. That shift matters.

The market reaction was immediate. Investors interpreted the launch as a signal that AI-powered code security tools could challenge existing cybersecurity vendors. Traditional vendors charge enterprise-grade fees for static application security testing (SAST). Anthropic is essentially commoditizing this by bundling advanced security checks into a general-purpose AI coding assistant. If a developer can secure their code while writing it for a fraction of the cost, the demand for standalone security platforms may dwindle. As "vibe coding" - the practice of using AI to generate entire applications via natural language - becomes the industry standard, security must be built in at the point of creation. Investors are betting that AI-native security will replace the "bolted-on" security models of the last decade.

However, analysts argue that Claude Code Security does not directly compete with real-time intrusion prevention, endpoint detection, or identity management platforms - core revenue drivers for companies like CrowdStrike and Okta. Instead, it focuses on secure coding and vulnerability detection at the development stage. So why the sell-off? Three factors stand out. First, investor sensitivity to AI disruption remains high after months of volatility in software stocks. Second, generative AI tools are expanding rapidly into areas once thought safe from automation. Third, valuation multiples in cybersecurity remain elevated, leaving little room for uncertainty. Some analysts described the move as narrative-driven panic rather than fundamentals-based repricing.

At this stage, Claude Code Security is in research preview. There is no disclosed revenue impact. It does not replace security operations centers. It does not manage identities. It does not stop ransomware in real time. However, the broader concern is long-term. If AI models can reliably audit large codebases, detect complex vulnerabilities, and recommend fixes faster than human teams, software development workflows could change dramatically. That could reduce demand for certain traditional scanning tools. It could also pressure pricing models in application security testing.

On the other hand, many experts believe AI will enhance cybersecurity rather than replace it. AI-driven security tools may become embedded across enterprise platforms, benefiting both startups and incumbents. In other words, disruption and opportunity may arrive together.

The sharp drop in cybersecurity stocks reflects something larger than one product launch. Markets in 2026 are highly sensitive to AI innovation. Each new capability from leading AI firms raises questions about which sectors might face disruption next. Anthropic's Claude Code Security signals that artificial intelligence is moving beyond chatbots and content generation into core enterprise infrastructure. That expansion increases competitive pressure. It also increases uncertainty. For now, the tool remains in preview. Revenue impact is unclear.
But the reaction underscores how quickly investor sentiment can shift when AI enters a new domain. Claude Code Security has not yet replaced cybersecurity firms. It has not reported revenue. It has not automated patch deployment. But it has demonstrated that AI-powered code security tools can scan entire codebases, identify hidden vulnerabilities, and prioritize risk at scale. That capability alone was enough to shake markets. The sell-off may prove temporary, or it may mark the early stage of structural change in cybersecurity. What is certain is this: AI-driven vulnerability detection is no longer theoretical. It is live, it is evolving, and investors are watching closely. And in today's AI-sensitive market, even a research preview can move billions.

1. Why did Claude Code Security cause cybersecurity stocks to crash?
Cybersecurity stocks fell as much as 11% in a single trading session on February 23, 2026, immediately after Anthropic unveiled Claude Code Security. CrowdStrike, Datadog and Zscaler led the declines, while the Global X Cybersecurity ETF hit its lowest level since November 2023. Investors reacted to fears that AI-powered code security tools could disrupt traditional vulnerability scanning and application security markets. The sell-off reflected valuation risk and broader anxiety about AI reshaping enterprise software revenue models.

2. Does Claude Code Security replace traditional cybersecurity tools?
No. According to Anthropic, Claude Opus 4.6 identified more than 500 previously undiscovered vulnerabilities in production open-source codebases, but the tool focuses on code auditing and vulnerability detection, not real-time threat detection, endpoint protection, or identity management. It does not block ransomware or manage live attacks. Instead, it scans codebases and recommends patches for human approval. That limits direct overlap with core products from major cybersecurity firms.

3. Is Claude Code Security a real threat to cybersecurity companies' revenue?
No revenue impact has been disclosed so far, as the product remains in research preview for Enterprise and Team users. While the technology signals long-term competitive pressure in secure software development and static analysis markets, it does not currently replace intrusion prevention platforms or cloud security systems. The immediate financial threat appears limited, but long-term margin pressure in application security could emerge if AI-driven vulnerability detection scales rapidly.

4. How accurate is Claude Code Security at detecting software vulnerabilities?
Anthropic claims its latest model detected over 500 hidden vulnerabilities that had gone unnoticed for years despite expert review. Each finding receives a severity rating and confidence score after a multi-stage verification process. No fixes are applied automatically; developers must approve changes. While independent benchmarking data is still limited, the scale of detections suggests AI-assisted code security is advancing quickly and may outperform rule-based static analysis in complex environments.
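The review flow described above -- verified findings carrying severity and confidence ratings, with patches gated on developer approval -- can be pictured concretely. Below is a minimal sketch of what such a finding record might look like; the field names and structure are illustrative assumptions, not Anthropic's actual output format.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """Illustrative shape of one AI code-security finding (hypothetical fields)."""
    title: str              # short description of the flaw
    file: str               # where the issue lives
    severity: str           # e.g. "critical", "high", "medium", "low"
    confidence: float       # 0.0-1.0: how sure the system is after self-verification
    explanation: str        # human-readable reasoning about the data flow
    suggested_patch: str    # proposed diff, produced for review, never auto-applied
    approved: bool = False  # a developer must flip this before anything ships

def apply_if_approved(finding: Finding) -> None:
    """The human-in-the-loop gate: patches only move forward after review."""
    if not finding.approved:
        print(f"[{finding.severity}, {finding.confidence:.0%}] "
              f"{finding.title}: awaiting developer review")
        return
    print(f"Applying reviewed patch to {finding.file}")

apply_if_approved(Finding(
    title="Order total trusted from the client",
    file="checkout/cart.py",
    severity="high",
    confidence=0.92,
    explanation="The price flows from the request body to the payment call unvalidated.",
    suggested_patch="--- a/checkout/cart.py\n+++ b/checkout/cart.py\n...",
))
```

The design point the articles keep returning to is the last field: whatever the real schema looks like, nothing deploys until a human flips the approval gate.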
[13]
CrowdStrike Stock Extends Losses As Anthropic AI Tool Roils Cybersecurity - CrowdStrike Holdings (NASDAQ:CRWD)
CrowdStrike Holdings Inc (NASDAQ:CRWD) shares are trading lower Monday morning, extending Friday's sell-off after Anthropic introduced Claude Code Security, a new AI tool that scans codebases for vulnerabilities and proposes fixes for developer review. The limited research preview aims to help security teams uncover subtle, high-severity bugs that traditional rule-based scanners often miss. Here's what investors need to know.

Cybersecurity Peers Reevaluate AI Competition
Anthropic's Friday launch triggered a pullback across cybersecurity names as investors weighed whether AI-first platforms could eventually displace parts of the software security stack now dominated by CrowdStrike and its peers. Shares of CrowdStrike and rivals including Palo Alto Networks, Cloudflare, GitLab and Okta are all trading lower Monday morning, underscoring fears that AI-native tools from Anthropic and OpenAI could pressure growth and pricing power. Claude Code Security runs on Anthropic's Opus 4.6 model and has already been used to identify hundreds of previously undiscovered vulnerabilities in open-source projects. Anthropic positions the product as a defensive capability, arguing that security teams must match attackers' growing use of AI by finding and patching weaknesses earlier in the development cycle. For CrowdStrike investors, the key question is how quickly offerings like Claude Code Security and rival tools will be adopted, and whether they complement or cannibalize spending on established platforms. Until there's more clarity on that competitive dynamic, CRWD shares may remain sensitive to fresh AI-driven headlines.

CRWD Shares Slide Below Key Averages
CrowdStrike is currently trading 16.8% below its 20-day simple moving average (SMA) and 27% below its 100-day SMA, indicating a bearish trend. Shares have decreased by 9.68% over the past 12 months and are positioned closer to their 52-week lows than highs, suggesting ongoing weakness in the stock. The RSI is at 35.43, at the low end of neutral territory, while the MACD shows a value of -14.4602 against a signal line of -14.7813 -- a bullish crossover, since the MACD sits above the signal line. This combination suggests that while the stock is under pressure, there may be underlying strength that could support a rebound. (The arithmetic behind these indicators is sketched after this article.)
Key Resistance: $392.50
Key Support: $304.00

Earnings Countdown Ahead For CrowdStrike
CrowdStrike is set to report earnings on March 3.
EPS Estimate: 74 cents (down from $1.03)
Revenue Estimate: $1.30 billion (up from $1.06 billion)
Valuation: Forward P/E of 480.2x
Analyst Consensus & Recent Actions: The stock carries a Buy rating with an average price target of $560.09. Recent analyst moves include:
Stifel: Buy (lowers target to $480.00) (Feb. 23)
Barclays: Overweight (lowers target to $550.00) (Feb. 20)
Truist Securities: Buy (lowers target to $550.00) (Feb. 17)

Benzinga Edge Rankings
Benzinga Edge rankings show CrowdStrike with a Momentum score of 19.99, while its short-, medium- and long-term price trends are all marked negative.

CRWD Price Action: CrowdStrike Holdings shares were down 9.83% at $350.41 at the time of publication on Monday, according to Benzinga Pro data.
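For readers less familiar with the indicators cited above, the underlying arithmetic is simple. A minimal sketch, plugging in the article's reported figures (reported values only, not live market data):

```python
# Percent distance below a simple moving average, plus the MACD-vs-signal
# comparison cited above. Inputs are the article's reported values.
price = 350.41                 # CRWD at publication Monday
sma_20 = price / (1 - 0.168)   # implied 20-day SMA if price sits 16.8% below it

def pct_below(price: float, sma: float) -> float:
    """Percent distance below a simple moving average."""
    return (sma - price) / sma * 100

print(f"{pct_below(price, sma_20):.1f}% below the 20-day SMA")  # -> 16.8%

# MACD above its signal line is conventionally read as a bullish
# crossover, even when both values are negative, as they are here.
macd, signal = -14.4602, -14.7813
print("bullish crossover" if macd > signal else "bearish")      # -> bullish crossover
```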
[14]
5 Things To Know On Anthropic's Claude Code Security
The AI platform said Friday it's adding vulnerability scanning capabilities into its web-based Claude Code tool in a move to compete with application security vendors. The announcement is the latest by Anthropic to add LLM-powered functionality that rivals established software makers, and part of a wider trend that has shaken investor confidence in the software industry as a whole. Share prices for multiple major security vendors fell Friday after the Anthropic announcement, notably on a day that saw the broader stock market indices rise. What follows are five things to know about Anthropic's Claude Code Security.

Claude Code Security marks the first dedicated security product from Anthropic and -- for now -- it's limited to the sphere of application security. In a blog post, Anthropic disclosed that Claude Code Security will provide codebase scanning for vulnerabilities. The tool will then make suggestions for "targeted software patches for human review, allowing teams to find and fix security issues that traditional methods often miss," the company said in the post. Claude Code Security is part of Anthropic's Claude Code offering on the web and is now rolling out in a limited research preview.

Anthropic contended that its tool will enable massive security improvements compared with existing automated testing methods such as static analysis. Such methods are usually rule-based and can only compare code with known vulnerabilities, the company said. Claude Code Security, on the other hand, "reads and reasons about your code the way a human security researcher would," Anthropic said. That means the tool can understand "how components interact, tracing how data moves through your application, and catching complex vulnerabilities that rule-based tools miss," the company said. The result is that Claude Code Security will be capable of uncovering "more complex vulnerabilities, like flaws in business logic or broken access control," compared with static analysis methods, Anthropic said (a minimal illustration of this distinction follows the article). In addition, every finding made by the tool will be put through a "multistage verification process" before it's forwarded to an analyst, the company said. The findings are also given severity ratings to help with prioritization, according to Anthropic.

With the codebase scanning capabilities offered by Anthropic poised to make AI-assisted vulnerability discovery easier, the trend will benefit attackers as well as defenders. Anthropic admitted as much in its post Friday, writing that "the same capabilities that help defenders find and fix vulnerabilities could help attackers exploit them." While Anthropic will no doubt include guardrails aimed at preventing misuse of its own tools, the dual-use nature of AI scanning suggests that attacks will only intensify as the technology continues to advance. Ultimately, threat actors "will use AI to find exploitable weaknesses faster than ever," the company said. "But defenders who move quickly can find those same weaknesses, patch them, and reduce the risk of an attack. Claude Code Security is one step towards our goal of more secure codebases and a higher security baseline across the industry."
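To make that rule-based-versus-reasoning distinction concrete, here is a minimal, hypothetical illustration (not drawn from Anthropic's materials). The first flaw is a textbook pattern a signature-based scanner flags with a regex; the second is a business-logic hole that only surfaces by reasoning about how data flows between components. The `db` handle and table layout are invented for the example.

```python
import re

# Flaw 1: a hardcoded secret. Rule-based SAST catches this because the
# vulnerable *pattern* is known in advance and matchable with a regex.
API_KEY = "sk-live-1234567890abcdef"
SECRET_RULE = re.compile(r"sk-live-[0-9a-f]+")
assert SECRET_RULE.search(API_KEY)  # a signature scanner flags this line

# Flaw 2: broken access control. Every line is individually innocuous; the
# bug only appears by tracing data flow: `invoice_id` arrives from the
# request, and nothing ever checks that the invoice belongs to the caller.
# There is no signature for "missing ownership check" -- finding it
# requires reasoning about what the code *should* do.
def get_invoice(db, current_user_id: int, invoice_id: int):
    invoice = db.fetch("SELECT * FROM invoices WHERE id = ?", (invoice_id,))
    # MISSING: if invoice["owner_id"] != current_user_id: raise PermissionError
    return invoice
```

The second class of bug is what Anthropic means by flaws "that rule-based tools miss": the defect is an absent check, not a present pattern.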
Despite the restriction of Anthropic's initial efforts to the application security sphere, investors expressed worries about the broader security industry Friday following the Anthropic announcement. As of this writing, shares in CrowdStrike were down 6.5 percent to $394.50 a share, while Cloudflare's stock price fell 6.2 percent to $180.71 a share. Zscaler's stock dropped 3.1 percent to $163.76 a share, and Palo Alto Networks was down 0.6 percent to $150.14 a share.

Earlier this week, Palo Alto Networks CEO Nikesh Arora said that investor fears that AI poses more of a risk than an opportunity for cybersecurity vendors are unfounded, with LLMs unlikely to rival the capabilities of security products in the foreseeable future. During Palo Alto Networks' quarterly call Tuesday, Arora told analysts that while GenAI and AI agents are already proving massively helpful for security products and teams, there are clear limitations on what LLMs can do. "I'm still confused why the market is treating AI as a threat" to the cybersecurity industry, he said, adding that he "can't speak for all of software." LLMs aren't accurate enough to fully replace key segments such as security operations, and many security tools -- including Palo Alto Networks' broad platform -- have a major leg up through access to real-world customer data for training their AI models, Arora noted.
[15]
AI coming for cybersecurity too? Anthropic fear hits another corner of Indian stock market
Cybersecurity stocks in India and the US fell sharply after Anthropic launched Claude Code Security, raising concerns about AI-driven disruption in legacy systems and vulnerability management. Indian counters dropped up to 20%, while global peers also declined, as analysts cautioned that the sell-off may be narrative-driven.

Just as investors were beginning to believe that the disruptive impact of artificial intelligence might remain largely confined to the IT services sector, fresh concerns have surfaced about its potential to reshape the cybersecurity industry as well. Tuesday's sharp decline in cybersecurity stocks reflected those fears, as investors assessed the possible impact of the new security tool launched by Anthropic. The company's latest feature, Claude Code Security, is designed to identify high-severity vulnerabilities in open-source software repositories and provide patches to fix the bugs, raising questions about how AI-driven tools could affect traditional cybersecurity service providers.

In Tuesday's session, several cybersecurity and technology stocks came under sharp selling pressure, with some falling as much as 20%. TAC Infosec was among the worst hit, hitting a 20% lower circuit at Rs 415.70 on the NSE. TechD Cybersecurity dropped more than 14%, while Sattrix Information Security declined 5%. Exato Technologies slipped 3%. Among other stocks, Sasken Technologies fell up to 3.2% to its day's low of Rs 1,155 per share on the BSE. Quick Heal Technologies declined more than 3%, while Expleo Solutions dropped nearly 5% to its intraday low of Rs 791 per share. In the US, shares of CrowdStrike, Datadog and Zscaler fell around 11%, while those of Fortinet and Okta were down roughly 6%. Palo Alto Networks dropped 3% and SentinelOne was down by 5%.

Anthropic has also said its Claude Code tool can be used to modernise legacy systems that run on COBOL. On Monday, the company said Claude Code could automate much of the exploration and analysis that drives the complexity of COBOL modernisation. Short for Common Business-Oriented Language, COBOL is a programming language developed in the late 1950s that remains dominant in business data processing, including payment processing and retail transaction systems. According to Anthropic, an estimated 95% of ATM transactions in the US still rely on COBOL, making it a potential target for cost-efficient AI disruption. "Hundreds of billions of lines of COBOL run in production every day, powering critical systems in finance, airlines and government. Despite that, the number of people who understand it shrinks every year," Anthropic said in its latest blog post.

"What you're seeing today is really the continuation of a panic-driven, narrative-led sell-off," Shrenik Kothari, director, security and infrastructure analyst at Robert W. Baird, told Reuters. Claude Code Security does not handle real-time security tasks such as detecting live intrusions, stopping attacks in progress or managing compiled software components in production, which are capabilities provided by other specialised security platforms, Kothari added.
[16]
Anthropic Debuts Security Tool as Cybersecurity Stocks Fall | PYMNTS.com
Anthropic's debut of a new security tool coincided with a slide in cybersecurity shares, according to a report Friday (Feb. 20) from Bloomberg News, which characterizes this as the latest example of software shares dropping due to worries about competition from artificial intelligence firms. In this case, the release of Anthropic's tool, which "scans codebases for security vulnerabilities and suggests targeted software patches for human review," preceded a decline in the share price of companies such as Cloudflare, the report said.

"There's been steady selling in software, and today it's security that's getting a mini-flash crash on a headline," said Dennis Dick, head trader at Triple D Trading. "This kind of market is scary for investors, because things are just moving relentlessly to the downside as soon as you get a hint of disruption. It's rational to be cautious, because people were saying a while ago that the software drop was overdone, and yet it keeps going down."

Another AI-related software selloff occurred last summer, affecting the likes of Salesforce and Workday, PYMNTS reported. The Bloomberg report added that much of the selling has accompanied the launch of new AI tools from the likes of Anthropic, Google and OpenAI. Investors worry that the ability to "vibe code," or employ AI to write software code, will let users develop their own applications, meaning less demand for legacy software products.

However, as PYMNTS wrote last year, research has shown that vibe coding is not about to replace human software developers anytime soon. According to that report, researchers have found that agentic AI models such as Claude performed best when developers reviewed outputs after key checkpoints instead of running fully autonomous sessions. "Without those checkpoints, the models produced longer, less maintainable codebases and missed security constraints," PYMNTS added. The findings line up with earlier research on CoAct-1: Computer-Using Agents with Coding as Actions, which also concluded that human interaction remains critical for steering multi-agent software systems toward reliable outcomes. "Vibe coding may indeed spark a new economy, but not through total automation. Its real promise lies in redefining collaboration: Developers who manage, teach and correct AI will shape the next era of software creation," that report said. "In the process, coding may become less about syntax and more about a shared workflow where human oversight remains."
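The checkpoint pattern those researchers describe is easy to picture. A minimal sketch, with an invented `agent_step` function standing in for whatever coding agent is actually in use:

```python
def agent_step(task: str, accepted: list[str]) -> str:
    """Stand-in for one autonomous coding-agent iteration (hypothetical)."""
    return f"proposed change for: {task}"

def run_with_checkpoints(tasks: list[str]) -> list[str]:
    """Pause for human review after each milestone rather than letting the
    agent run a fully autonomous session, per the research cited above."""
    accepted: list[str] = []
    for task in tasks:
        proposal = agent_step(task, accepted)
        verdict = input(f"{proposal}\nAccept? [y/N] ")  # the human checkpoint
        if verdict.strip().lower() == "y":
            accepted.append(proposal)
        # Rejected work is simply dropped, so a bad step cannot compound
        # into the longer, less maintainable codebases the study describes.
    return accepted

if __name__ == "__main__":
    print(run_with_checkpoints(["add login rate limiting", "migrate session store"]))
```

The design choice matters because agent errors compound: gating each step keeps a flawed proposal from becoming the foundation for every step after it.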
[17]
Why Anthropic's New AI Tool Claude Code Security Is Rattling Cybersecurity Stocks - Global X Cybersecurity ETF (NASDAQ:BUG), CrowdStrike Holdings (NASDAQ:CRWD)
On Friday, Anthropic unveiled Claude Code Security, a new AI-driven tool designed to autonomously hunt down software vulnerabilities and propose fixes. Unlike traditional scanners that look for known patterns, this capability, embedded into its agentic coding tool for developers, lets the AI analyze full codebases and reason about how different pieces of software interact, flagging subtle and complex flaws that humans and legacy tools often miss, the company said in a blog post. Here's what to know:

Built On Advanced AI Reasoning
The new tool leverages Anthropic's latest model, Opus 4.6, which has been tested internally by the company's Frontier Red Team. According to Anthropic, the model has already identified more than 500 previously unknown high-severity vulnerabilities in production open-source projects -- flaws that had gone undetected for years despite extensive human review.

Raises Stakes In Cybersecurity Arms Race
Anthropic framed Claude Code Security as a defensive tool in a landscape where attackers are increasingly using AI to find and exploit bugs faster than ever. The company is rolling out the capability cautiously in a limited research preview for enterprise and team customers, and offering accelerated access to open-source maintainers who lack resources for robust security testing. The announcement sparked a notable sell-off in cybersecurity stocks on Friday, as investors weighed the implications of AI moving deeper into a domain long dominated by specialist security vendors.

Still Needs Human Intervention
Claude Code Security reviews full codebases, tracing how data flows through systems and inspecting interactions across components. It also rates the severity of issues it finds and generates human-readable explanations and suggested patches. However, it does not auto-apply fixes. All changes must be reviewed and approved by developers, a safeguard meant to prevent unintended consequences.

Competitive Pressure Heats Up
AI competition in security isn't limited to Anthropic. OpenAI debuted its own automated cybersecurity tools, starting with "Aardvark" in October last year. Both companies could bring these AI tools directly into the software development process, where traditional security companies like CrowdStrike and Palo Alto Networks have long been the leaders.
[18]
4 Vijay Kedia stocks crash up to 20% on fears of AI-led disruption
Shares of four companies backed by ace investor Vijay Kedia tumbled up to 20% on Tuesday, as investors weighed the potential impact on the industry of AI startup Anthropic's new security tool. Anthropic's new feature, Claude Code Security, is designed to detect high-severity vulnerabilities in open-source software repositories and offer patches to fix bugs.

Among the worst hit was TAC Infosec, where Kedia holds a 9.58% stake, or over 20 lakh shares, as per the latest shareholding pattern on the exchanges. The stock hit a 20% lower circuit at Rs 415.70 on the National Stock Exchange of India. TechD Cybersecurity, where Kedia owns a 5.26% stake, or nearly 3.93 lakh shares, dropped more than 14%. Sattrix Information Security fell 5%, with Kedia holding an 8.06% stake, equivalent to 9.14 lakh shares, while Exato Technologies, where he owns a 3.48% stake, or about 3.50 lakh shares, slipped 3%.

In the US, shares of CrowdStrike, Datadog and Zscaler fell around 11%, while those of Fortinet and Okta were down roughly 6%. Palo Alto Networks dropped 3% and SentinelOne was down by 5%.

Anthropic said its Claude Code tool can be used to modernize legacy systems that run on COBOL. On Monday, the company said Claude Code could automate much of the exploration and analysis that drives the complexity of COBOL modernization -- a key business area for IBM, which has long sold mainframe systems optimized for large-scale transaction processing, where COBOL is widely used. Short for Common Business-Oriented Language, COBOL is a programming language developed in the late 1950s that remains widely used in business data processing, including payment processing and retail transaction systems. According to Anthropic, an estimated 95% of ATM transactions in the U.S. still rely on COBOL, making it a potential target for cost-efficient AI disruption. "Hundreds of billions of lines of COBOL run in production every day, powering critical systems in finance, airlines, and government. Despite that, the number of people who understand it shrinks every year," Anthropic said in its latest blog post.
[20]
Why AI bolt-ons won't help IT, cybersecurity weather this storm
In less than ten days, Anthropic triggered two separate stock selloffs across enterprise technology. First, its Cowork agent shook IT services stocks. Then, late last week, it launched Claude Code Security, a tool that scans software for weaknesses and suggests fixes. Billions were lost in cybersecurity stocks in a single trading session, with some of the biggest names in the sector falling 8-9% in one day.

Analysts say the next wave of damage is already taking shape. Mid-tier companies in developer tools, code-related security, and software firms across the US and India that simply added AI into existing products are seen as most at risk. "Point security or SaaS players with AI bolt-ons displacing IT and low-code platforms face high risk," said Gaurav Vasu, CEO of UnearthInsight.

The numbers back that up. Motilal Oswal Financial Services has modelled a scenario where rapid AI-led pricing pressure over 12-18 months could lead to earnings cuts of around 10% across large-cap IT companies. Markets are already pricing in free cash flow growth of just 6.5% over ten years, well below earlier high-growth cycles. UnearthInsight expects overall sector growth to slow to just 3-4% in FY27, with roughly 1.5 percentage points of that coming from acquisitions rather than real organic demand.

Yet the overall market keeps growing. Gartner forecasts worldwide IT spending will reach $6.1 trillion in 2026 and $7.6 trillion by 2029, driven largely by AI investment. The disruption, said Biswajit Maity, senior principal analyst at Gartner, is best understood as a massive restructuring of the technology industry driven by the intelligence supercycle.

For companies that cannot keep up, the Accion Labs story may be just the start: the digital engineering firm was seeking private equity investment of $800 million in September 2025, and by February 2026 it was being acquired by PAG for $500-550 million -- a roughly 30% drop in under five months. "Only those who restructure and show continued high growth will command good valuations," Vasu said.

Standalone cybersecurity vendors face pressure from another side, too. Microsoft is bundling more security features into its existing products, with new pricing kicking in from July 2026, experts said. That makes it harder for smaller players to charge enterprise clients separately for tools they can now get as part of a package they already pay for.

The reason these companies are at risk is that AI is not just making workers faster; it is breaking the logic of how enterprise technology has been priced and sold for decades. "Traditional pricing models based on labor hours fail to reflect the value delivered to customers. Outcome and value-based models better align provider incentives with client success and are more resilient in an AI-driven environment," Maity said.

In IT services, companies have long billed clients by the hour. AI reduces the number of hours needed, which directly shrinks revenue. UnearthInsight estimates that 12-15% of traditional hourly billing faces direct pressure from AI tools automating coding, testing, and routine application work. The shift is already underway on the ground: a recent Gartner survey found that 74% of service providers are already using generative AI to lower delivery costs and meet contractual obligations for cost reductions over time.
"Enterprise clients now expect global and Indian IT services partners to pass on productivity gains through pricing or embedded solutions," Vasu said. In software, the pressure is coming from a different direction. Seat-based pricing, where companies pay per user, stops making sense when AI agents can do the work of many human users. Enterprises can also build their own tools more easily now, and smaller software firms can enter markets at much lower cost. "SaaS companies will have to move from seat-based pricing to outcome or output-based models to survive," said Pareekh Jain, CEO of EIIRTrend. Also Read: Explained: What is Anthropic's AI tool that's sparking job loss fears
[21]
CrowdStrike, Datadog and other cybersecurity stocks slide after Anthropic's AI tool launch
Feb 23 (Reuters) - Shares of cybersecurity companies including CrowdStrike and Datadog slumped on Monday, as investors weighed the potential impact of artificial intelligence startup Anthropic's new security tool on the industry. Anthropic's new feature, Claude Code Security, is designed to detect high-severity vulnerabilities in open-source software repositories and offer patches to fix bugs. Shares of CrowdStrike, Datadog and Zscaler fell around 11%, while those of Fortinet and Okta were down roughly 6%. Palo Alto Networks dropped 3% and SentinelOne was down by 5%. Software stocks have been battered in recent months by market fears around the growing capabilities of AI tools, particularly following the launch of plug-ins from Anthropic's large language model Claude, seen as the startup's push to become an application layer. "What you're seeing today is really the continuation of a panic-driven, narrative-led selloff," said Shrenik Kothari, director, security and infrastructure analyst at Robert W. Baird. Claude Code Security does not handle real-time security tasks such as detecting live intrusions, stopping attacks in progress or managing compiled software components in production, which are capabilities provided by other specialized security platforms, said Kothari. Some analysts have said the selloff is an overreaction, fueled by an overly simplistic narrative that AI would negate the need for existing cybersecurity solutions. Separately, AI chip designer Nvidia said on Monday it has teamed up with Akamai, Forescout, Palo Alto Networks, Xage Security and Siemens to boost real-time cybersecurity for industrial control systems. (Reporting by Jaspreet Singh in Bengaluru; Editing by Jonathan Ananda)
[22]
What is Claude Code Security? Why Anthropic's new AI tool has investors worried as cybersecurity stocks crash
Anthropic AI cybersecurity stocks crash today: Cybersecurity stocks experienced another rough day on Monday as investors grew wary of emerging AI-powered tools that could disrupt traditional security business models. Companies like CrowdStrike and Zscaler dropped around 9%, Netskope fell nearly 10%, and SailPoint lost 6%. Other major names, including Okta, SentinelOne, Fortinet, and Cloudflare, also declined. Even ETFs focused on cybersecurity, such as the iShares Cybersecurity & Tech ETF, dropped nearly 4%.

Executives pushed back against fears that AI would replace their platforms. CrowdStrike CEO George Kurtz wrote on LinkedIn that AI innovation is "inspiring," but added: "Let's stay grounded in reality: an AI capability that scans code does not replace the Falcon platform -- or your security program. Security requires an independent, battle-tested platform built to stop breaches," as quoted by CNBC. Similarly, Palo Alto Networks CEO Nikesh Arora said he was "confused" why the market viewed AI as a threat to cybersecurity and that customers want more AI to scale their security stack, as quoted by CNBC.

The launch of Anthropic's Claude Code Security contributed to the recent stock declines, as investors worry about AI replacing parts of traditional workflows. Bank of America analysts noted that AI tools mainly affect code scanning platforms, such as GitLab and JFrog, but cannot yet replace full end-to-end security platforms.

Anthropic recently unveiled Claude Code Security, a new AI capability designed to help teams find and fix software vulnerabilities faster. Unlike traditional tools that rely on rule-based analysis, Claude reads and reasons about code like a human researcher. It can trace data flow, understand component interactions, and detect subtle, context-dependent vulnerabilities that standard scanners often miss. Claude's findings go through multiple verification stages, with each issue assigned severity and confidence ratings. Suggested patches are presented to developers, but nothing is applied automatically; human review remains central. This ensures teams maintain control while leveraging AI to tackle backlogs of vulnerabilities more efficiently.

Claude Code Security builds on over a year of research into cybersecurity. Anthropic's Frontier Red Team tested Claude in competitive Capture-the-Flag events and collaborated with the Pacific Northwest National Laboratory on AI defenses for critical infrastructure. Anthropic said that, using Claude Opus 4.6, the team uncovered more than 500 vulnerabilities in production open-source codebases -- bugs that had gone unnoticed for decades.

What is Claude Code Security?
Claude Code Security is an AI tool from Anthropic that scans software code for vulnerabilities and suggests fixes, helping developers spot issues faster than traditional methods.

Which platforms are most impacted by Claude?
AI tools mainly affect code scanning platforms, such as GitLab and JFrog, but full end-to-end cybersecurity platforms remain essential.
[23]
Cybersecurity stocks hit sharply by Anthropic 'Claude Code Security'
Shares of companies in the cybersecurity solutions space have nosedived after the American AI company Anthropic PBC unveiled a tool capable of autonomously finding and patching possible software vulnerabilities. At the time of filing this report, JFrog shares were down 24 per cent, CrowdStrike Holdings was down 8 per cent, Okta was down over 9 per cent, and GitLab was down over 8 per cent. Similarly, shares of Zscaler, Rubrik Inc and Palo Alto Networks also dropped sharply, data showed.

Anthropic's Claude Code Security is a new capability built into Claude Code on the web, and the company said it is now available in a limited research preview. "It scans codebases for security vulnerabilities and suggests targeted software patches for human review, allowing teams to find and fix security issues that traditional methods often miss," the company said in a statement. Claude Code Security is intended to put solutions squarely in the hands of defenders and protect code against what the company called a new category of AI-enabled attack. "We're releasing it as a limited research preview to Enterprise and Team customers, with expedited access for maintainers of open-source repositories, so we can work together to refine its capabilities and ensure it is deployed responsibly," the statement added.

Ever since the emergence of artificial intelligence, many have been apprehensive that AI companies may emerge as competitors to established software makers. There is growing investor angst over generative AI coding tools and automated AI assistants, and how they might affect traditional software companies -- their broader business outlook and, in particular, their profit margins.
[24]
Claude Code Security explained: How it caused a cyber stock crash
On February 20, 2026, the cybersecurity sector experienced a "mini-flash crash" that wiped out over $15 billion in market value in a single day. The catalyst was a product announcement from the AI startup Anthropic: Claude Code Security. While this event is often grouped into the broader "SaaSpocalypse" of 2026, it was a distinct second wave of panic that specifically targeted the defensive moats of the cybersecurity industry.

Claude Code Security is a specialized engine built into the Claude Code platform that leverages the advanced reasoning capabilities of the Opus 4.6 model. Unlike traditional security tools that rely on "signatures" or pre-defined rules to find known bugs, Claude treats a codebase like a complex narrative. It "reads" the logic of an application to understand intent, allowing it to identify sophisticated architectural flaws and "zero-day" style vulnerabilities that have eluded human researchers for years. By the time of its launch, the tool had already autonomously identified and suggested fixes for over 500 high-severity vulnerabilities in major open-source projects, many of which had remained undetected for decades.

The feature goes beyond simple detection by pairing it with remediation. When Claude identifies a security flaw, it doesn't just alert a developer; it generates a precise software patch, explains the underlying logic of the fix, and presents it for human review. This capability addresses the most significant bottleneck in the industry: the "cybersecurity talent gap." By automating triage and patch generation, the tool transforms security from a slow, manual review process into a near-real-time, integrated part of the development workflow. This vision of "self-healing" code suggests a future where software can be secured almost as fast as it is written, potentially reducing the need for many third-party monitoring services.

It is important to distinguish this event from the general "SaaSpocalypse" that occurred earlier in the month. On February 4, 2026, the launch of Claude Cowork triggered a massive sell-off in general software and IT services stocks (like Salesforce and Infosys) because it threatened the "per-seat" licensing model. The February 20 crash, by contrast, was a surgical strike on cybersecurity stocks. Investors panicked specifically because Claude Code Security proved that AI could "reason" through security problems, threatening to commoditize the high-margin business models of giants like JFrog (-25%), Okta (-9%), and CrowdStrike (-8%). While the first crash was about the death of software seats, the second was about the potential death of the third-party security tax.
Anthropic launched Claude Code Security, an AI-powered tool that scans codebases for security vulnerabilities and suggests software patches. The announcement triggered a sharp selloff in cybersecurity stocks, with CrowdStrike falling nearly 8% and the Global X Cybersecurity ETF hitting a two-year low. The market reaction reflects growing investor anxiety about AI security tools displacing traditional cybersecurity business models.
Anthropic unveiled Claude Code Security on Friday, introducing a capability that scans codebases for security vulnerabilities and suggests targeted software patches for human review [1]. Unlike traditional security software that relies on rule-based pattern matching, the tool reasons through code like a security researcher, according to the company [2]. The AI security tools assign severity ratings to identified issues and include confidence rankings indicating how certain the system is in its assessment of each risk [2]. Available as a limited research preview for Enterprise and Team customers, the tool also offers free expedited access for open-source maintainers. (Source: Fortune)

Anthropic emphasized that nothing is applied without human approval, with Claude Code Security identifying problems and suggesting solutions while developers always make the final call [3]. The company claims the tool is context-aware, reading and reasoning about code the way a human security researcher would by understanding how components interact, tracing how data moves through applications, and catching complex vulnerabilities that rule-based tools miss [3].

The market reaction was swift and severe. CrowdStrike dropped nearly 8% in Friday's session, while Cloudflare fell just over 8% [2]. Zscaler declined 3.5%, SailPoint shed 6.8%, and Okta fell 5.7%. The Global X Cybersecurity ETF tumbled as much as 3.8%, extending its year-to-date losses to 14% and closing at its lowest point in over two years [1]. JFrog experienced the most dramatic decline, dropping nearly a quarter of its value in a single session [1]. (Source: ET)

The selloff continued into Monday, with CrowdStrike and Zscaler each dropping about 9%, while Netskope declined nearly 10% [5]. SailPoint fell 6%, while Okta, SentinelOne, and Fortinet lost more than 4% each [5]. Investor anxiety centered on concerns that AI-powered code security tools could replace tasks handled by traditional cybersecurity companies, threatening their longstanding business models.

CrowdStrike CEO George Kurtz defended his company's position in a LinkedIn post over the weekend, arguing that an AI capability that scans code does not replace the Falcon platform or comprehensive security programs [5]. He emphasized that security requires an independent, battle-tested platform built to stop breaches [3]. Kurtz reportedly asked Claude if its new security tool could replace what CrowdStrike does, with Claude responding no [3]. (Source: Benzinga)

Analysts note that the market reaction was both accurate and overreaching. The 24% hit to JFrog was the clearest signal because its value proposition depends on specialized software supply chain controls that autonomous AI agents now directly threaten [1]. However, the declines for companies like CrowdStrike, Okta, and Zscaler were largely sentiment contagion, as CrowdStrike does not analyze code, Okta does not repair injection flaws, and their core functions differ from application security [1].

Anthropic's announcement represents the latest move in a competitive race among AI companies. Google moved first with CodeMender, a system that blends Gemini reasoning with traditional program analysis techniques to automate patch creation [1][3]. OpenAI followed with Aardvark, an agentic security system based on GPT-5 that embeds semantic analysis directly into the CI pipeline and began beta testing in October [1][3]. Anthropic delivered what analysts call the most consequential shift by bundling Claude Code Security into an existing licensing path, albeit as a research feature for now [1]. Earlier this month, Anthropic claimed that Claude Opus 4.6 found and validated more than 500 high-severity vulnerabilities in open source code [3]. The category that took real damage is the one most perceived to be dependent on pattern matching: SAST, SCA, and ASPM [1].

The same day as Anthropic's launch, the Financial Times reported that Amazon's internal AI coding tool, Kiro, caused a thirteen-hour outage by deleting and recreating a production environment [1]. While Amazon blamed user error due to excessive permissions, the reality is that an autonomous agent made an irreversible choice no reasonable human developer would have made [1]. This incident highlights the tension every enterprise will face as autonomy creates value but also introduces new attack vectors.

Security experts remain cautiously optimistic. Glenn Weinstein, CEO of Cloudsmith, stated that anything helping developers write better, safer code is beneficial, while Isaac Evans, CEO of Semgrep, expressed excitement for Claude Code Security despite not having tried it yet [3]. However, Evans noted that the real test will be performance at scale, pointing out that foundation model companies have not published detailed statistics on false positives or the cost to achieve their results [3]. The broader software sector faces similar pressures, with the iShares Expanded Tech-Software Sector ETF down more than 23% this year, on track for its biggest quarterly percentage drop since the 2008 financial crisis.

Summarized by Navi