5 Sources
[1]
Claude Code makes it easy to trigger a code check now with this simple command
Run the /security-review command in the terminal or via GitHub Action.

Claude Code became generally available in May, and since then it has become popular among developers for its coding assistance, available right in the terminal or in integrated development environments (IDEs). Now, new features are coming to Claude that make it easier to build safely, too.

On Wednesday, Anthropic introduced automated security reviews in Claude Code. They let developers identify and fix security concerns more easily, and can be invoked either manually using the new "/security-review" command or automatically via the new GitHub Action for Claude Code.

"On demand or automatically, Claude will review the code that you're working on, the code that you're pushing, or your entire repository, and practically identify vulnerabilities and suggest ways to fix them," Logan Graham, the engineer behind the new features on Anthropic's Frontier Red Team, told ZDNET.

All developers have to do is invoke the /security-review command in Claude Code, which triggers the security analysis in the terminal. Anthropic said Claude will then search the codebase, identify common vulnerabilities such as SQL injection risks, insecure data handling, and authentication flaws, and explain the issues it finds.

"We want it to be, and I think we can get there soon if it's not there already, kind of like having the best security engineer or best senior software engineer, over shoulder, helping you do your work, better and securely," Graham added.

After identifying the issues, the user can also ask Claude Code to implement fixes for each one. This lets developers catch issues easily by integrating security reviews before committing code or before it reaches production.
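To picture the workflow the article describes, a hypothetical terminal session might look like the following. The project path is invented and Claude Code's actual prompts and output will differ; only the "/security-review" command itself comes from the announcement.

```shell
# Illustrative session only -- the project path is made up, and
# the tool's real interactive output will look different.
cd ~/projects/my-app   # run from your own project directory
claude                 # launch Claude Code in the terminal
# At the interactive prompt, type the slash command:
#   /security-review
# Claude then scans the codebase, reports issues such as SQL
# injection risks, insecure data handling, and authentication
# flaws, and can be asked to implement a fix for each finding.
```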
ZDNET's own David Gewirtz, a computer science professor turned AI innovator, found the update helpful, saying, "Adding the security review as a command is good. Otherwise, you'd have to embed it in each query or add it to their system instructions."

Pull requests are an essential part of the collaborative development process, but they require extensive manual review before being merged into the main codebase. Now, with the new GitHub Action for Claude Code, developers can have Claude automatically analyze every pull request when it's opened, a step that can otherwise be forgotten or overlooked.

Anthropic said Claude can review code changes for security vulnerabilities, apply customizable rules, and post inline comments with concerns and recommendations for fixes. Anthropic has used the GitHub Action to catch vulnerabilities in its own code before shipping to users, according to the release. Just last week, the GitHub Action identified a remote code execution vulnerability, which was fixed before the pull request was merged.

To access the /security-review command, update Claude Code to the latest version and run it in your project directory. Anthropic has posted documentation for installing and configuring the GitHub Action.
[2]
Anthropic targets DevSecOps with Claude Code update as AI rivals gear up
With GPT-5 on the horizon and Meta ramping up AI hiring, Anthropic's new security-focused features aim to differentiate Claude in the increasingly crowded GenAI coding space. Anthropic has introduced automated security reviews in its Claude Code product, aiming to help developers identify and fix vulnerabilities earlier in the software development process. The update includes a GitHub Actions integration and a new "/security-review" command, allowing developers to prompt Claude to scan code for security issues and recommend fixes. The launch follows Anthropic's release of Claude Opus 4.1, its most advanced AI model to date, which the company says offers major improvements in handling coding tasks.
[3]
Anthropic ships automated security reviews for Claude Code as AI-generated vulnerabilities surge
Anthropic launched automated security review capabilities for its Claude Code platform on Wednesday, introducing tools that can scan code for vulnerabilities and suggest fixes as artificial intelligence dramatically accelerates software development across the industry.

The new features arrive as companies increasingly rely on AI to write code faster than ever before, raising critical questions about whether security practices can keep pace with the velocity of AI-assisted development. Anthropic's solution embeds security analysis directly into developers' workflows through a simple terminal command and automated GitHub reviews.

"People love Claude Code, they love using models to write code, and these models are already extremely good and getting better," said Logan Graham, a member of Anthropic's frontier red team who led development of the security features, in an interview with VentureBeat. "It seems really possible that in the next couple of years, we are going to 10x, 100x, 1000x the amount of code that gets written in the world. The only way to keep up is by using models themselves to figure out how to make it secure."

The announcement comes just one day after Anthropic released Claude Opus 4.1, an upgraded version of its most powerful AI model that shows significant improvements in coding tasks. The timing underscores an intensifying competition between AI companies, with OpenAI expected to announce GPT-5 imminently and Meta aggressively poaching talent with reported $100 million signing bonuses.

Why AI code generation is creating a massive security problem

The security tools address a growing concern in the software industry: as AI models become more capable at writing code, the volume of code being produced is exploding, but traditional security review processes haven't scaled to match.
Currently, security reviews rely on human engineers who manually examine code for vulnerabilities -- a process that can't keep pace with AI-generated output. Anthropic's approach uses AI to solve the problem AI created. The company has developed two complementary tools that leverage Claude's capabilities to automatically identify common vulnerabilities, including SQL injection risks, cross-site scripting vulnerabilities, authentication flaws, and insecure data handling.

The first tool is a command that developers can run from their terminal to scan code before committing it. "It's literally 10 keystrokes, and then it'll set off a Claude agent to review the code that you're writing or your repository," Graham explained. The system analyzes code and returns high-confidence vulnerability assessments along with suggested fixes.

The second component is a GitHub Action that automatically triggers security reviews when developers submit pull requests. The system posts inline comments on code with security concerns and recommendations, ensuring every code change receives a baseline security review before reaching production.

How Anthropic tested the security scanner on its own vulnerable code

Anthropic has been testing these tools internally on its own codebase, including Claude Code itself, providing real-world validation of their effectiveness. The company shared specific examples of vulnerabilities the system caught before they reached production.

In one case, engineers built a feature for an internal tool that started a local HTTP server intended for local connections only. The GitHub Action identified a remote code execution vulnerability exploitable through DNS rebinding attacks, which was fixed before the code was merged. Another example involved a proxy system designed to manage internal credentials securely. The automated review flagged that the proxy was vulnerable to Server-Side Request Forgery (SSRF) attacks, prompting an immediate fix.
"We were using it, and it was already finding vulnerabilities and flaws and suggesting how to fix them in things before they hit production for us," Graham said. "We thought, hey, this is so useful that we decided to release it publicly as well."

Small development teams get enterprise-grade security tools for free

Beyond addressing the scale challenges facing large enterprises, the tools could democratize sophisticated security practices for smaller development teams that lack dedicated security personnel. "One of the things that makes me most excited is that this means security review can be kind of easily democratized to even the smallest teams, and those small teams can be pushing a lot of code that they will have more and more faith in," Graham said.

The system is designed to be immediately accessible. According to Graham, developers can start using the security review feature within seconds of the release, requiring just about 15 keystrokes to launch. The tools integrate seamlessly with existing workflows, processing code locally through the same Claude API that powers other Claude Code features.

Inside the AI architecture that scans millions of lines of code

The security review system works by invoking Claude through an "agentic loop" that analyzes code systematically. According to Anthropic, Claude Code uses tool calls to explore large codebases, starting by understanding the changes made in a pull request and then proactively exploring the broader codebase to understand context, security invariants, and potential risks.

Enterprise customers can customize the security rules to match their specific policies. The system is built on Claude Code's extensible architecture, allowing teams to modify existing security prompts or create entirely new scanning commands through simple markdown documents. "You can take a look at the slash commands, because a lot of times slash commands are run via actually just a very simple Claude.md doc," Graham explained.
"It's really simple for you to write your own as well."

The $100 million talent war reshaping AI security development

The security announcement comes amid a broader industry reckoning with AI safety and responsible deployment. Recent research from Anthropic has explored techniques for preventing AI models from developing harmful behaviors, including a controversial "vaccination" approach that exposes models to undesirable traits during training to build resilience.

The timing also reflects the intense competition in the AI space. Anthropic released Claude Opus 4.1 on Tuesday, with the company claiming significant improvements in software engineering tasks -- scoring 74.5% on the SWE-Bench Verified coding evaluation, compared to 72.5% for the previous Claude Opus 4 model. Meanwhile, Meta has been aggressively recruiting AI talent with massive signing bonuses, though Anthropic CEO Dario Amodei recently stated that many of his employees have turned down these offers. The company maintains an 80% retention rate for employees hired over the last two years, compared to 67% at OpenAI and 64% at Meta.

Government agencies can now buy Claude as enterprise AI adoption accelerates

The security features represent part of Anthropic's broader push into enterprise markets. Over the past month, the company has shipped multiple enterprise-focused features for Claude Code, including analytics dashboards for administrators, native Windows support, and multi-directory support. The U.S. government has also endorsed Anthropic's enterprise credentials, adding the company to the General Services Administration's approved vendor list alongside OpenAI and Google, making Claude available for federal agency procurement.

Graham emphasized that the security tools are designed to complement, not replace, existing security practices. "There's no one thing that's going to solve the problem. This is just one additional tool," he said.
However, he expressed confidence that AI-powered security tools will play an increasingly central role as code generation accelerates.

The race to secure AI-generated software before it breaks the internet

As AI reshapes software development at an unprecedented pace, Anthropic's security initiative represents a critical recognition that the same technology driving explosive growth in code generation must also be harnessed to keep that code secure. Graham's team, called the frontier red team, focuses on identifying potential risks from advanced AI capabilities and building appropriate defenses.

"We have always been extremely committed to measuring the cybersecurity capabilities of models, and I think it's time that defenses should increasingly exist in the world," Graham said. The company is particularly encouraging cybersecurity firms and independent researchers to experiment with creative applications of the technology, with an ambitious goal of using AI to "review and preventatively patch or make more secure all of the most important software that powers the infrastructure in the world."

The security features are available immediately to all Claude Code users, with the GitHub Action requiring one-time configuration by development teams. But the bigger question looming over the industry remains: Can AI-powered defenses scale fast enough to match the exponential growth in AI-generated vulnerabilities? For now, at least, the machines are racing to fix what other machines might break.
[4]
Anthropic automates software security reviews with Claude Code - SiliconANGLE
Generative artificial intelligence startup Anthropic PBC today introduced the ability for Claude Code to automate software security reviews, identifying and fixing potential vulnerabilities and weaknesses in code.

As code becomes more complex and software engineers embrace "vibe coding," which uses AI as part of the development process, the number of security issues in code has increased. According to Verizon's 2025 Data Breach Investigations Report, there has been a 34% increase in attackers exploiting vulnerabilities to gain initial access compared with last year's report.

Developers are increasingly using AI to speed up their workflows and create more complex systems, resulting in a surge of code generation. Security reviews are essential to this process; they involve thoroughly examining code to ensure it functions correctly with unit tests and does not contain exploits or known vulnerabilities that attackers could take advantage of.

Claude Code is a command-line tool from Anthropic that lives in the terminal, is powered by the company's AI models, and allows developers to automate coding tasks and interact with codebases using natural language. Using GitHub Actions, developers can now easily ask Claude to identify security concerns and then have it fix them.

After writing code, a developer can type "/security-review" and Claude Code will begin an ad-hoc security analysis before the code is committed. The company's AI model will search the codebase for potential vulnerabilities and provide a detailed explanation of the issues it discovers. "This command uses a specialized security-focused prompt that checks for common vulnerability patterns," the company said. The types of potential exploits Claude can uncover include SQL injection risks, cross-site scripting attacks, authentication and authorization flaws, insecure data handling and dependency vulnerabilities.
The developer can also ask Claude Code to implement fixes for each issue after it's identified. "This keeps security reviews in your inner development loop, catching issues early when they're easiest to fix," Anthropic said.

Going a step further, Claude Code can be initiated automatically when code is moved from development to testing. The AI model scans the code, works to filter out false positives, and posts comments into tickets about any security concerns it finds, including recommendations and fixes. That way, the development team can review the potential issues and recommendations and follow up. According to Anthropic, this ensures that no code reaches production without at least an automated security review, and it integrates easily into automated DevOps continuous integration and continuous deployment (CI/CD) pipelines.

The system can be tailored to align with a team's security policies and best practices. This customization lets teams adjust its sensitivity, its behavior, and how it collaborates with developers during workflow integration.

Other major tech companies have also released their own code agents and assistants, including Google LLC's Code Assist, Amazon.com Inc.'s Q Developer and Microsoft Corp.'s AI-powered code review assistant, which can scan codebases for vulnerabilities at scale and suggest fixes. Many of these systems similarly connect to GitHub, allowing developers to flag potential bugs, improve their code and free up human reviewers to focus on architectural problems.

Anthropic added that it is using the new security review functionality itself to secure the code the team ships to production, including for Claude Code itself.
[5]
Anthropic Just Announced New Tools to Make Your Vibe Coding Project Safer
Anthropic has released an update to its vibe coding tool, Claude Code, aimed at helping it identify and address security vulnerabilities.

Vibe coding, or the act of using an AI assistant to help develop software and applications, has seen rapid adoption during 2025, and vibe coding platforms, many of which are powered by Anthropic's Claude AI models, have seen their valuations explode as a result. However, some recent headlines have raised concerns about the security of vibe-coded applications. Bitchat, a new messaging app created by Twitter and Block co-founder Jack Dorsey, was found by cybersecurity experts to have some key vulnerabilities in the way it was created. These vulnerabilities made it easy for bad actors to impersonate Bitchat users. Supernetworks CEO and cybersecurity expert Alex Radocea previously told Inc. that security issues in AI-generated apps are "a systemic issue right now."

To address these concerns, Anthropic is releasing an update to Claude Code. Users building apps on Claude Code will now be able to easily direct the tool to perform its own security review simply by running the command "/security-review" in their terminal. When the command is entered, according to a blog post from Anthropic, Claude will search your codebase for a variety of potential vulnerabilities. Once these issues have been identified, you can ask Claude to implement fixes.
Anthropic has launched new security features for Claude Code, including automated security reviews and a GitHub Action, to help developers identify and fix vulnerabilities in AI-generated code.
Anthropic, the artificial intelligence company behind Claude, has unveiled a significant update to its Claude Code platform, introducing automated security reviews to address the growing concern of vulnerabilities in AI-generated code [1]. This move comes as the AI industry faces increasing pressure to ensure the safety and security of rapidly generated code.
Source: Inc. Magazine
The update introduces two key features:
/security-review Command: Developers can now trigger a security analysis directly from their terminal by typing "/security-review" [1]. This command prompts Claude to scan the codebase for potential vulnerabilities, including SQL injection risks, cross-site scripting vulnerabilities, authentication flaws, and insecure data handling [3].
GitHub Action Integration: Anthropic has developed a GitHub Action that automatically initiates security reviews when developers submit pull requests [1]. This ensures that every code change receives a baseline security review before reaching production.
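As a rough sketch of how such a pull-request trigger is typically wired up, the workflow below is illustrative only: the action reference, input name, and secret name are assumptions, not taken from Anthropic's documentation, which should be consulted for the real configuration.

```yaml
# Hypothetical GitHub Actions workflow sketch. The action
# reference and the claude-api-key / ANTHROPIC_API_KEY names are
# assumptions; see Anthropic's docs for the actual setup.
name: Claude security review
on:
  pull_request:

jobs:
  security-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: anthropics/claude-code-security-review@main  # assumed reference
        with:
          claude-api-key: ${{ secrets.ANTHROPIC_API_KEY }}  # assumed names
```

Keyed to a `pull_request` trigger, a workflow like this would run the review on every newly opened or updated pull request and post its findings as inline comments, matching the behavior the articles describe.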
The introduction of these features is timely, as the industry grapples with the security implications of AI-accelerated code generation. Logan Graham, an engineer at Anthropic, highlighted the potential for a massive increase in code production, stating, "It seems really possible that in the next couple of years, we are going to 10x, 100x, 1000x the amount of code that gets written in the world" [3].
Source: ZDNet
Anthropic has been testing these tools internally on its own codebase, including Claude Code itself. The company shared examples of vulnerabilities caught before reaching production, such as a remote code execution vulnerability in an internal tool and a Server-Side Request Forgery (SSRF) vulnerability in a proxy system [3].
One of the key benefits of this update is its potential to democratize sophisticated security practices. Graham emphasized that these tools could make security reviews accessible even to the smallest development teams, allowing them to push code with greater confidence [3].
This release comes amid intensifying competition in the AI industry. Anthropic recently launched Claude Opus 4.1, its most advanced AI model, which shows significant improvements in coding tasks [2]. Other major tech companies, including Google, Amazon, and Microsoft, have also released their own code agents and assistants with similar security-focused features [4].
Source: VentureBeat
The system is designed to be easily integrated into existing workflows and can be customized to align with specific team security policies and best practices. Enterprise customers can modify existing security prompts or create new scanning commands through simple markdown documents [3].
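To make that customization concrete, a custom scanning command might be a short markdown file along the lines below. The file path and layout here are assumptions for illustration, not Anthropic's documented conventions, which should be checked before use.

```markdown
<!-- Hypothetical custom slash command. The .claude/commands/ path
     and the file layout are assumptions, not Anthropic's
     documented format. -->
<!-- File: .claude/commands/strict-security-review.md -->

Review the files changed in this branch for security issues,
paying particular attention to:

- SQL queries built by string concatenation
- Secrets or credentials checked into the repository
- New HTTP endpoints that lack authentication or authorization checks

Report only high-confidence findings, and propose a concrete fix
for each one.
```

The idea, per Graham's description, is that the prompt text itself is the command: teams tighten or relax the review simply by editing the document rather than writing any code.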
Summarized by Navi