
Anthropic just launched Claude Code Review. $15-25 per review. Team and Enterprise only. Twenty minutes per review cycle.
Let me get this straight. You're paying Claude to write your code. Then you're paying Claude again to review the code it just wrote for you. Double the tokens, double the bill.
I'm not here to bash Claude Code Review. It's probably a polished product, and it's going to get better. That's what these companies do -- launch at enterprise pricing, get the margins, then bring costs down over time. We're probably six to eighteen months away from this becoming mainstream and cheap.
But here's the thing. I didn't want to wait. And I definitely didn't want to pay $25 per review on an active repo.
So I built my own. It's a GitHub Action and a local agent tool. Open source. Works with Claude and OpenAI. Cost per review? Less than two cents. Sometimes a lot less.
It's not magic. It's orchestration. And that's exactly the point -- you don't need a polished SaaS product to get AI code reviews working today. You need a workflow file, an API key, and some well-crafted prompts.
Let me show you what it does.
The Self-Healing Loop
Here's the demo that made me realize this was worth sharing.
I asked Claude Code to create a simple feature -- a book search tool backed by SQLite. Intentionally sloppy. I wanted to see what the reviewer would catch. Claude Code wrote it, created a branch, pushed a PR.
The GitHub Action triggered automatically. The AI code reviewer ran against the diff and came back with five real findings.
Each finding included the file, the line, a severity level, and a suggested fix. All posted directly as comments on the PR.
Now here's where it gets interesting.
I took that review output and fed it right back to Claude Code. "Read the AI code review findings and fix all five issues." Claude Code read the PR comments, understood each finding, made the fixes, committed, and pushed.
The push triggered the GitHub Action again. Another review cycle. The reviewer found a couple of new issues that the fixes had introduced -- a connection that still wasn't being handled correctly, an exception path that needed work. So I fed it back again.
AI writes. AI reviews. AI fixes. AI reviews again.
This is the self-healing loop. It's not fully autonomous yet -- I'm still the one saying "go fix these" -- but the cycle is remarkably tight. Push, review, fix, push, review. Each iteration catches real issues, and each fix actually addresses them.
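The shape of that loop is simple enough to sketch. Here's a toy Python version with stub reviewer and fixer functions -- stand-ins for the real agent calls, not the repo's actual API -- just to show the control flow:

```python
# Hypothetical sketch of the write -> review -> fix cycle.
# run_review and apply_fixes are stubs, not the repo's real functions.

def run_review(code: list[str]) -> list[dict]:
    """Stub reviewer: flags any line still containing 'TODO'."""
    return [{"line": i, "issue": "unresolved TODO"}
            for i, line in enumerate(code) if "TODO" in line]

def apply_fixes(code: list[str], findings: list[dict]) -> list[str]:
    """Stub fixer: resolves each flagged line."""
    fixed = list(code)
    for f in findings:
        fixed[f["line"]] = fixed[f["line"]].replace("TODO", "done")
    return fixed

def self_healing_loop(code: list[str], max_cycles: int = 5):
    """Push, review, fix, push -- until the review comes back clean."""
    for cycle in range(max_cycles):
        findings = run_review(code)
        if not findings:           # in practice a human decides here;
            return code, cycle     # the sketch auto-applies every fix
        code = apply_fixes(code, findings)
    return code, max_cycles

clean, cycles = self_healing_loop(["x = 1  # TODO handle errors", "y = 2"])
```

The real version swaps the stubs for a model call and a Claude Code session, but the control flow is exactly this.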
Could you automate the whole thing? Sure. Wire up the fix step to trigger automatically on review findings. But honestly, I want a human in that loop. The review findings are suggestions, not mandates. Sometimes the reviewer flags something that's intentional. You want eyes on it.
The point isn't full automation. The point is that the feedback loop is fast, cheap, and useful.
Run It Locally
You don't have to wait for CI. That's one of the things I use most.
Run /review locally as a Claude Code skill and get instant feedback on your current branch. It diffs against your base branch, sends it through the reviewer, and prints the findings right in your terminal.
I use this constantly. Before I push, before I even create a PR, I'll run a quick local review. It catches the dumb stuff -- the connection I forgot to close, the missing error handling, the function that grew too large. Faster than waiting for a CI pipeline, and faster than asking a colleague to context-switch into my code.
It works with both OpenAI and Claude. Swap models with a flag. The cost tracking is built in, so you can see exactly what each review costs. Spoiler: it's cents. Sometimes fractions of a cent.
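For intuition, here's the arithmetic behind that claim. The per-token prices below are illustrative assumptions -- check your provider's current rates:

```python
# Back-of-envelope review cost. Prices are assumed placeholders,
# not any provider's actual published pricing.
PRICE_PER_1M = {"input": 3.00, "output": 15.00}  # USD per million tokens

def review_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1_000_000 * PRICE_PER_1M["input"]
            + output_tokens / 1_000_000 * PRICE_PER_1M["output"])

# A typical PR diff plus prompts: ~3k tokens in, ~500 tokens out.
cost = review_cost(3_000, 500)
print(f"${cost:.4f}")  # about $0.017 -- under two cents
```

Even if you double the diff size, you're still nowhere near $15.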
The prompts are extracted to files -- a system message and a review prompt. That means you can customize them for your codebase. Want the reviewer to focus on security? Edit the prompt. Want it to enforce your team's naming conventions? Edit the prompt. Want it to ignore test files? Edit the prompt. You own the context.
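Loading those prompts is a few lines. The file names below are assumptions -- check the repo for its actual layout:

```python
import tempfile
from pathlib import Path

# Sketch of loading externalized prompts. File names are assumptions,
# not necessarily what the repo uses.
def load_prompts(prompt_dir: Path) -> tuple[str, str]:
    system = (prompt_dir / "system_message.txt").read_text()
    review = (prompt_dir / "review_prompt.txt").read_text()
    return system, review

# Demo with throwaway files standing in for your customized prompts.
demo = Path(tempfile.mkdtemp())
(demo / "system_message.txt").write_text("You are a strict security reviewer.")
(demo / "review_prompt.txt").write_text("Review this diff and return JSON findings.")
system_msg, review_prompt = load_prompts(demo)
```

Everything the reviewer knows about your codebase lives in those two files, which is exactly why editing them changes the review.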
What's in the Repo
Here's what you get out of the box:

- A GitHub Actions workflow that reviews every PR push automatically
- A local /review Claude Code skill for pre-push reviews
- Support for both OpenAI and Claude, switchable with a flag
- Built-in cost tracking for every review
- Editable prompt files (system message and review prompt) so you own the context

No dashboard. No enterprise plan. No per-seat pricing. Just a workflow file and your API key.
How It's Built

The architecture is deliberately simple. That's the whole point -- this isn't a product, it's plumbing.
The GitHub Actions workflow triggers on PR events. When a push hits an open PR, the workflow runs the agent tool that lives under agent_tools/. That's it. One workflow file, one trigger.
The agent tool has a few pieces:
Configuration handles the basics -- max findings per review, max follow-up findings, default model, and the diff generation between your feature branch and the base branch. This is what lets it work both in CI and locally. In CI, it knows the PR branches. Locally, you tell it what to diff against.
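A minimal sketch of that configuration, with assumed field names and defaults (not the repo's actual code):

```python
from dataclasses import dataclass

# Hypothetical config mirroring the settings described above.
@dataclass
class ReviewConfig:
    max_findings: int = 5
    max_followup_findings: int = 3
    model: str = "claude"          # default model, swappable via flag
    base_branch: str = "main"      # in CI this comes from the PR event

def diff_command(cfg: ReviewConfig, feature_branch: str) -> list[str]:
    # Triple-dot diff: only the changes made on the feature branch
    # since it forked from the base branch.
    return ["git", "diff", f"{cfg.base_branch}...{feature_branch}"]

cmd = diff_command(ReviewConfig(), "feature/book-search")
```

The triple-dot form matters: you want to review what the PR changed, not everything that happened on main in the meantime.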
Prompt files are where the real customization lives. The system message and review prompt are loaded from external files, not hardcoded. This means you can drop in a system message that describes your architecture, your coding standards, your security requirements -- whatever context makes the review more relevant. This is the advantage over a generic review product. Your reviewer knows YOUR codebase because you told it.
The reviewer takes the diff, sends it through the model with your prompts, and extracts structured JSON findings. Each finding has a file path, line number, severity, description, and suggested fix. Structured output means you can do anything with the results -- post them as PR comments, pipe them into Slack, feed them back to an agent.
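Parsing that structured output is straightforward. A sketch, with field names assumed from the description above:

```python
import json

# Fields each finding is expected to carry (assumed names).
REQUIRED_FIELDS = {"file", "line", "severity", "description", "suggested_fix"}

def parse_findings(raw: str, max_findings: int = 5) -> list[dict]:
    findings = json.loads(raw)
    # Drop anything malformed, cap at the configured maximum.
    valid = [f for f in findings if REQUIRED_FIELDS <= f.keys()]
    return valid[:max_findings]

raw = json.dumps([{
    "file": "books/search.py",      # illustrative finding, not a real one
    "line": 42,
    "severity": "high",
    "description": "SQLite connection is never closed",
    "suggested_fix": "use the connection as a context manager",
}])
findings = parse_findings(raw)
```

Once the findings are plain dicts, every downstream consumer -- PR comments, Slack, another agent -- is just a formatter.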
The commentator takes those JSON findings and posts them as inline PR comments through the GitHub API. Each comment lands on the exact line where the issue was found.
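That step maps onto GitHub's review-comments endpoint (POST /repos/{owner}/{repo}/pulls/{pull_number}/comments). A sketch of the payload, reusing the hypothetical finding fields from above -- the finding shape is an assumption, but body, commit_id, path, line, and side are the parameters the endpoint actually expects:

```python
# Build the inline-comment payload for GitHub's review-comments API.
def comment_payload(finding: dict, commit_sha: str) -> dict:
    return {
        "body": (f"**{finding['severity']}**: {finding['description']}\n\n"
                 f"Suggested fix: {finding['suggested_fix']}"),
        "commit_id": commit_sha,   # head commit of the PR
        "path": finding["file"],   # file the diff touched
        "line": finding["line"],   # line number in the new version
        "side": "RIGHT",           # anchor on the added/changed code
    }

payload = comment_payload(
    {"file": "books/search.py", "line": 42, "severity": "high",
     "description": "SQLite connection is never closed",
     "suggested_fix": "use the connection as a context manager"},
    "deadbeef",
)
```

Setting side to RIGHT pins the comment to the new version of the line, which is where a reviewer's eyes land in the diff view.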
The key insight: this is just orchestration. Diff generation, prompt construction, model call, response parsing, comment posting. No magic. No proprietary secret sauce. Which means you can swap models, add MCP tools, change the output format, or extend it however you want. Fork it, make it yours.
Getting Started
The repo is open source: https://github.com/DoryZi/ai-code-review-demo
Watch the full walkthrough to see it in action: https://youtube.com/watch?v=NElR29asEIw
Clone it, add your API key, and you're running reviews. The README walks you through setup for both the GitHub Action and the local skill.
I'll be honest -- it's not perfect. The reviewer sometimes flags things that are intentional. It occasionally misses nuance that a human reviewer would catch. The findings can be repetitive across cycles. It's a starting point, not a finished product.
But that's kind of the whole point.
We're in the early days of AI-assisted development tooling. The products will get better. The prices will come down. Six months from now, maybe Claude Code Review is $2 per review and worth every penny. Maybe every IDE has this built in.
But right now? Right now you can set this up in an afternoon, run it for pennies, customize it for your codebase, and own the entire pipeline. No vendor lock-in. No enterprise sales calls. No waiting for someone else's roadmap.
The real power isn't in any particular tool -- it's in understanding the orchestration well enough to build your own. A diff, a prompt, a model call, and some structured output. That's all a code review is.
Don't pay $25. Try it.
GitHub Repo: https://github.com/DoryZi/ai-code-review-demo
AI Code Review Video: https://youtube.com/watch?v=NElR29asEIw