Photo by Kevin Ku: Pexels
In the ongoing debate about the future of software development, we often hear that AI will replace us. As a Senior JS Dev, I firmly state: Artificial Intelligence will not fully replace human oversight in the code review process. Our role—understanding business context, architectural subtleties, and risk assessment—remains irreplaceable.
I view AI as a Skill ENHANCER. It is not a substitute, but a sophisticated assistant that significantly improves the quality and speed of the feedback loop we receive. It makes it easier for us to detect anomalies in our own code and in the contributions of the entire team, and it is an invaluable source of practical learning.
I know that you can use MCP protocols or Claude Code commands. But I deliberately keep it simpler—so that any beginner can try it, understand the mechanism, and adapt it to their needs without additional configuration.
Below, I show you how I configured a local, two-phase AI mechanism for the automated verification of code changes.
1. Environment Isolation: Clean Code, Clean Review
The foundation of effective Code Review is working on a fresh, undisturbed version of the code. I believe that conducting the review in the same place where you are intensely coding can lead to context contamination.
Therefore, I placed the verification process in an isolated clone of the repository. This approach ensures that my AI assistant has access to the latest, clean code version for comparison. In my development cycle, this is always the final validation stage before pushing changes for a Merge.
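A minimal sketch of that setup (all paths here are throwaway stand-ins, not the article's actual locations):

```shell
# Sketch of the isolated-clone workflow. The temp paths are hypothetical
# stand-ins; substitute your own working repo and review locations.
set -e
WORK="$(mktemp -d)/my-app"      # stand-in for the repo you actively code in
REVIEW="$(mktemp -d)/my-app"    # stand-in for the isolated review clone

# Simulate an existing working repository with one commit.
git init -q "$WORK"
git -C "$WORK" -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "initial commit"

# One-time setup: clone the repo into a separate review location.
git clone -q "$WORK" "$REVIEW"

# Before each review: refresh the clone so the AI compares against
# the latest clean state, untouched by in-progress edits.
git -C "$REVIEW" fetch -q origin
git -C "$REVIEW" status -sb
```

The point of the separate clone is simply that no half-finished local edits can leak into what the model sees.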
2. Configuration Architecture: My .ai Folder
We need a place to store the precise instructions for our model. I created a dedicated .ai directory in the root of the review repository.
Crucial Rule: This directory must be excluded from version control. Adding it to the .gitignore file guarantees that my instructions (I call them Prompt Books) remain private and local.
# .gitignore
.ai/

Inside the folder, two instruction files live under commands/: review_thorough.md and review_fast.md. These are two configurations dedicated to different levels of analysis depth.
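As a quick sketch of the one-time setup (the temp directory stands in for your review repo root; the layout follows the article):

```shell
# One-time setup of the local, untracked .ai directory.
set -e
cd "$(mktemp -d)"               # stand-in for the review repo root
git init -q .

mkdir -p .ai/commands
touch .ai/commands/review_thorough.md .ai/commands/review_fast.md

# Exclude the Prompt Books from version control.
echo ".ai/" >> .gitignore
git add .gitignore
git -c user.name=dev -c user.email=dev@example.com \
    commit -qm "ignore local AI prompt books"

# Confirm Git ignores the instruction files (prints the path if ignored).
git check-ignore .ai/commands/review_thorough.md
```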
3. Advanced Methodology: The Two-Phase Verification Mechanism
The greatest added value comes from the two-stage process, which aims to eliminate False Positives, the biggest headache of single-model reviews.
In the review_thorough.md instruction file, I require the AI to execute both phases:
Phase 1: The Critic Agent (Intensive Analysis)
The first model performs an intensive, critical review. Its task is to ruthlessly and meticulously detect every potential error, standard violation, or architectural ambiguity.
Phase 2: The Verifier Agent (Noise Reduction)
The second model is launched without the context of the findings from Phase 1. It acts as an anti-noise filter. Its sole role is to re-evaluate the problems found by the Critic and precisely reject anything that looks like an error but is not genuinely one.
Only after combining the results from both phases do I receive a highly reliable and actionable list of findings.
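To make the mechanism concrete, here is a sketch of how the two phases might be laid out inside review_thorough.md. The wording is purely illustrative, not the author's actual Prompt Book:

```markdown
## Phase 1: Critic Agent
You are a ruthless reviewer. Inspect every changed line and report
each potential bug, standard violation, or architectural ambiguity.
Do not suppress borderline findings; err on the side of reporting.

## Phase 2: Verifier Agent
Disregard Phase 1's reasoning. Re-evaluate each reported finding
from scratch against the actual code. Reject any finding that looks
like an error but is not genuinely one. Output only the findings
that survive verification.
```

The key design choice is in Phase 2's first line: the Verifier gets the findings but not the Critic's justifications, so it cannot simply rubber-stamp them.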
4. Precise Scope: We Review Only Diffs
To ensure the feedback I receive is substantial and relates exclusively to my work, the AI does not review the entire feature. In the instructions, I define that it should only analyze the difference (diff) between my current working branch and the base branch (develop or main).
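The scoping itself is plain Git. A self-contained sketch (repo, branch, and file names are invented for illustration; the article's prompt uses git diff develop, while the three-dot variant shown here additionally excludes commits that landed on develop after branching):

```shell
# Demonstrates scoping a review to only the branch's own diff.
set -e
cd "$(mktemp -d)"
git init -q -b develop .
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "base"

git switch -qc feature/login
echo "console.log('hello');" > app.js
git add app.js
git -c user.name=dev -c user.email=dev@example.com \
    commit -qm "add app.js"

# Three-dot syntax diffs against the merge base with develop, so only
# the branch's own contribution is in scope.
git diff develop...HEAD --name-only   # -> app.js
```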
This is crucial for focusing on the developer's contribution. The executive instruction for the AI includes this scope directive:
[Execution]
execute both phases
[Prompt]
run git diff develop to get changes, read each changed file completely, perform primary review, verify each finding, write final review, and notify when complete with summary.

5. Report and Execution – CLI Efficiency
The final result of the process is a concise, structured report that is easy to digest and immediately usable when creating a Merge Request.
What I Get in the Report:
- Summary of Changes: A brief description of the context, making it easier to understand what has changed.
- Error Categorization: Findings by risk level (Critical, High, Medium, Low).
- Project Risk Assessment.
- Recommendations: Suggested solutions along with technical comments.
Technical Note: The entire report structure (summary, error categories, risk assessment) is defined directly within the MD file (Prompt Book), allowing me to precisely control what should be returned by the AI model.
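As an illustration only (not the author's actual file), the output-format section of such a Prompt Book might look like this:

```markdown
## Output Format
1. Summary of Changes: 2-3 sentences of context.
2. Findings, grouped by risk level: Critical, High, Medium, Low.
   For each: file, line, description, suggested fix.
3. Project Risk Assessment: one short paragraph.
4. Recommendations: concrete solutions with technical comments.
```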
Rapid Execution via Aliases
The process is executed locally using a CLI tool integrated with the AI model.
The base command is as follows:
cat .ai/commands/review_thorough.md | claude --print

Finally, to avoid having to remember complex commands, I set up aliases that let me run the review without digging through the readme:
# Aliases in .zshrc or .bashrc
alias reviewfull='cat .ai/commands/review_thorough.md | claude --print'
alias reviewquick='cat .ai/commands/review_fast.md | claude --print'

Thanks to this, in any project where I have this configured, I can instantly invoke a full, two-phase review using the simple command reviewfull.
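As an optional variation (my suggestion, not from the article), a shell function instead of two aliases lets you pick the depth with an argument and fails loudly if the Prompt Book is missing:

```shell
# Hypothetical helper covering both review depths.
# Usage: review            -> thorough two-phase review
#        review fast       -> quick single-pass review
review() {
  local mode="${1:-thorough}"
  local book=".ai/commands/review_${mode}.md"
  if [ ! -f "$book" ]; then
    echo "no prompt book: $book" >&2
    return 1
  fi
  cat "$book" | claude --print
}
```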



