Anthropic has announced Code Review, a new AI code review tool on the Claude Code platform, designed to automatically catch potential issues before they enter the codebase and ease the review bottleneck that surging code volumes have created in enterprise development pipelines. The feature launched on Monday and is currently available as a research preview to Claude for Teams and Claude for Enterprise customers.

With the rise of "vibe coding," developers increasingly use AI tools to generate large volumes of code from natural language. This has significantly improved development efficiency, but it has also introduced problems such as code vulnerabilities, security risks, and reduced readability. Cat Wu, a product leader at Anthropic, told TechCrunch that because Claude Code greatly accelerates code generation, the number of pull requests inside enterprises has risen rapidly, overwhelming traditional manual review processes and turning review into a key bottleneck in software delivery.
The new Code Review feature integrates with GitHub, automatically analyzing pull requests and marking potential issues and suggested fixes directly in the code. The system focuses on identifying logical errors rather than style issues to keep its feedback actionable. The AI explains its reasoning step by step, covering where the problem is, its potential impact, and feasible fixes, and it uses color codes to indicate risk levels: red for serious issues, yellow for potential risks that warrant attention, and purple for issues tied to historical errors or existing code structure.
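A structured finding of this shape could be modeled as a small data type; this is a minimal sketch, and the class names, fields, and severity labels below are illustrative assumptions, not Anthropic's actual schema.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical severity levels mirroring the article's color codes.
class Severity(Enum):
    RED = "serious issue"
    YELLOW = "potential risk"
    PURPLE = "historical or structural issue"

# Hypothetical finding: location, severity, and the three parts of the
# explanation the article describes (problem, impact, suggested fix).
@dataclass
class ReviewFinding:
    file: str
    line: int
    severity: Severity
    problem: str
    impact: str
    suggestion: str

finding = ReviewFinding(
    file="billing.py",
    line=42,
    severity=Severity.RED,
    problem="division by zero when the invoice list is empty",
    impact="crashes the nightly billing job",
    suggestion="guard the average calculation with a length check",
)
print(f"[{finding.severity.name}] {finding.file}:{finding.line} - {finding.problem}")
```

A reviewer skimming a pull request would then see each annotation tagged with its color-coded severity alongside the affected line.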
Architecturally, the system uses a multi-agent collaboration mechanism: multiple agents review the codebase in parallel along different dimensions, and a summarizing agent then consolidates the results, removes duplicates, and assigns priorities, yielding a more efficient automated review process. The tool also offers basic security analysis and lets enterprises add custom check rules to match their internal standards; Anthropic's previously released Claude Code Security provides more in-depth security checks.
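The fan-out-then-consolidate pattern described above can be sketched as follows. This is a toy illustration under stated assumptions: the reviewer functions are stubs standing in for what would really be LLM calls, and all names are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical reviewer "agents", each examining the diff along one dimension.
def logic_reviewer(diff: str) -> list[dict]:
    return [{"line": 10, "issue": "off-by-one in loop bound", "severity": 1}]

def security_reviewer(diff: str) -> list[dict]:
    return [
        {"line": 10, "issue": "off-by-one in loop bound", "severity": 1},
        {"line": 25, "issue": "SQL query built by string concatenation", "severity": 0},
    ]

def summarize(all_findings: list[list[dict]]) -> list[dict]:
    """Summarizing step: merge results, drop duplicates, sort by priority
    (lower severity number = more serious)."""
    seen, merged = set(), []
    for findings in all_findings:
        for f in findings:
            key = (f["line"], f["issue"])
            if key not in seen:
                seen.add(key)
                merged.append(f)
    return sorted(merged, key=lambda f: f["severity"])

diff = "..."  # placeholder for the pull request diff
# Fan out: run the reviewers in parallel, then consolidate their findings.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda reviewer: reviewer(diff),
                            [logic_reviewer, security_reviewer]))
report = summarize(results)
for f in report:
    print(f"line {f['line']}: {f['issue']}")
```

Here the duplicate off-by-one finding reported by both agents is collapsed into one entry, and the more serious concatenation issue is surfaced first, mirroring the deduplication and prioritization the summarizing agent performs.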
Because the multi-agent architecture is computationally expensive, Code Review uses a token-based billing model, with costs varying by code complexity; Cat Wu estimates the average review costs between $15 and $25. She said the product primarily targets large enterprise customers, such as Uber, Salesforce, and Accenture, which already use Claude Code at scale and want automated review to handle their high volume of pull requests.
The launch comes at a critical stage for Anthropic. On the same day, the company filed two lawsuits against the U.S. Department of Defense over its designation as a supply chain risk entity. Meanwhile, its enterprise business is growing rapidly: the company says enterprise subscriptions have quadrupled since the beginning of the year, and annualized revenue from Claude Code has exceeded $2.5 billion since its launch.
As the volume of AI-generated code continues to grow, automated code review is becoming essential infrastructure in enterprise development processes. With this feature, Anthropic aims to help enterprises reduce vulnerability risk while increasing development speed, pushing AI-assisted software development toward greater reliability.
