With not much more effort you can get a much better review by additionally concatenating the touched files and sending them as context along with the diff. It was the work of about five minutes to make the scaffolding of a very basic bot that does this, and then somewhat more time iterating on the prompt. By the way, I find it's seriously worth sucking up the extra ~four minutes of delay and going up to GPT-5 high rather than using a dumber model; I suspect xhigh is worth the ~5x additional bump in runtime on top of high, but at that point you have to start rearchitecting your workflows around it and I haven't solved that problem yet.
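For the curious, here's a minimal sketch of that scaffolding, assuming the repo is checked out and the `llm` CLI is installed and configured; the PR number and prompt are illustrative:

```sh
# Concatenate the diff and the full text of every touched file,
# then send the lot to a model for review.
pr=123   # illustrative PR number
{
  echo "=== DIFF ==="
  gh pr diff "$pr"
  echo "=== TOUCHED FILES (FULL TEXT) ==="
  gh pr view "$pr" --json files --jq '.files[].path' | while read -r f; do
    printf -- '--- %s ---\n' "$f"
    cat "$f"
  done
} | llm -s "Review this change. The diff comes first; the full text of each
touched file follows for context. Report real bugs, worst first."
```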
(That's if you don't want to go full Codex and have an agent play around with the PR. Personally I find that GPT-5.2 xhigh is incredibly good at analysing diffs-plus-context without tools.)
I've been using gemini-3-flash for the last few days and it's quite good; I'm not sure you need the biggest models anymore. I've only switched to pro once or twice in that time.
Depends what you mean by "need", of course, but in my experience the curves aren't bending yet; a better model still means a better-quality review (although GPT-5.0 high was still a reasonably competent reviewer)!
Hum? I just tell Claude to review PR #123 and it uses `gh` to do everything, including responding to human comments! Feedback from colleagues has been awesome.
Good thing I work on an old C++ code base where it's impossible for AI to go through the millions of lines that all interact horribly in unpredictable ways.
As for PR reviews, assuming you've got linting and static analysis out of the way, you'd need a sufficiently well-crafted prompt to actually catch problems, or to surface reviews that match your standards rather than generic AI comments.
My company uses some automatic AI PR review bots, and they annoy me more than they help. Lots of useless comments
I would just put a PR_REVIEW.md file in the repo and have a CI agent run it on the diff/repo and decide pass or reject. In this file are the rules the code must be evaluated against: project-level policy, plus whatever constraints you cannot check by code testing. Of course, any constraint that can be a code test is better off as a code test.
My experience is that you can trust any code that is well tested, human- or AI-generated, and you cannot trust any code that is not well tested (what I call "vibe tested"). But some constraints need to be expressed in natural language, and for those you need an LLM to review the PRs. This combination of code tests and LLM review should be able to ensure reliable AI coding. If it doesn't, iterate on your PR rules and on your tests.
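A minimal sketch of such a CI gate, assuming the rules live in PR_REVIEW.md, the `llm` CLI is available in CI, and the PR number arrives in a hypothetical `PR_NUMBER` environment variable:

```sh
#!/usr/bin/env sh
# Sketch: feed the diff plus the natural-language rules to an LLM and turn
# its verdict into a CI pass/fail. PR_NUMBER is an illustrative env var.
set -eu
verdict=$(gh pr diff "$PR_NUMBER" | llm -s "$(cat PR_REVIEW.md)

Evaluate the diff against the rules above. The first line of your answer
must be exactly PASS or REJECT; then list any violated rules.")
echo "$verdict"
case "$verdict" in
  PASS*) exit 0 ;;   # rules satisfied
  *)     exit 1 ;;   # reject: fail the CI job
esac
```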
`gh pr diff num` is an alternative if you have the repo checked out. One can then pipe the output to one's favorite LLM CLI and create a shell alias with a default review prompt.
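For instance, a sketch assuming Simon Willison's `llm` CLI (a shell function rather than an alias, so it can take the PR number as an argument; the prompt is just a starting point):

```sh
# Pipe a PR's diff into an LLM with a canned review prompt.
prreview() {
  gh pr diff "$1" | llm -s "You are a meticulous code reviewer. List the \
most important problems in this diff, worst first. Skip style nits that a \
linter would catch."
}
# Usage: prreview 123
```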
> My company uses some automatic AI PR review bots, and they annoy me more than they help. Lots of useless comments
One way to make them more useful is to ask them to list the top N problems found in the change set.
While this approach is useful, I think the diff alone is too little context to catch a lot of bugs.
I use https://www.coderabbit.ai/ and it tends to be aware of files that aren't in the diff, and can definitely see the rest of the file you're editing (not just the lines in the diff).
I recently started using LLMs to review my code before asking for a more formal review from colleagues. It's actually been surprisingly useful - why waste my colleagues' time with small obvious things? But it's also sometimes gone much further than that, with deeper review points. Even when I don't agree with them, it's great having that little bit more food for thought - if anything, it helps seed the review.
"Diff to master and review the changes. Branch designed to address <problem statement>. Write output to d:\claudeOut in typst (.typ) format."
It'll do the diffs and search both branch and master versions of files.
I prefer reading PDFs to markdown, but it'll default to markdown if left unprompted, should you prefer that.
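(Getting from the .typ output to a PDF is one command, assuming the Typst CLI is installed; the filename here is illustrative:)

```sh
# Compile the generated review to a PDF with the Typst CLI.
typst compile d:/claudeOut/review.typ d:/claudeOut/review.pdf
```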
I have almost all my workspaces configured with /add-dir to add d:/claudeOut and d:/claudeIn as general scratch folders with temporary in/out file permissions, so it can read/write outside the context of the workspace for things like this.
You might get better results using a better-crafted prompt (or a code review skill?). In general I find Claude Code reviews:
- Are overly fussy about null-checking everything
- Completely miss whether the PR has properly distilled the problem down to its essence
- Are good at catching spelling mistakes
- Like to pretend they know whether something is well architected, but don't
So it's a bit of a mixed bag: it focuses on trivia, but it's still useful as a first pass so your teammates don't have to catch that same trivia.
It will absolutely assume too much from naming, so when it makes the wrong kind of assumptions about how parts work, that's a good signal to think about how to name things more clearly.
e.g. If you write a class called "AddingFactory", it'll go around assuming that's what it does, even if the core of it returns (a, b) -> a*b.
You have to then work hard to get it to properly examine the file and convince itself that it is actually a multiplier.
Obviously real-world examples are more subtle than that, but if you're finding yourself arguing with it, it's worth sometimes considering whether you should rename things.
This one's served me fairly well:
"Review this diff - detect top 10 problem-causers, highlight 3 worst - I'm talking bugs with editing, saving etc. (not type errors or other minor aspects) [your diff]". The bit on "editing, saving" would vary based on the goal of the diff.
We're a Haskell shop, so I usually just say "review the current commit. You're an experienced Haskell programmer and you value readable and obvious code" (because that is indeed what we value on the team). I'll often ask it to explicitly consider testing, too.
Not who you're replying to, but working at a small company I didn't have anyone to send my code to for review, so I've used AI to fill that gap. I usually go with a specific pass and then a general one; for example, if I'm making heavy use of async logic, I'll ask the LLM to pay particular attention to the pitfalls that can arise with it.
I have been using Codex as a code review step and it has been magnificent, truly. I don’t like how it writes code, but as a second line of defence I’m getting better code reviews out of it than I’ve ever had from a human.
Here are the commits; the tasks were not trivial:
https://github.com/hofstadter-io/hof/commits/_next/
Social posts and pretty pictures as I work on my custom copilot replacement
https://bsky.app/profile/verdverm.com
We are sooo gonna get replaced soon...
You can also append ".patch" to the GitHub PR URL and get a more useful output.
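For example (owner, repo, and PR number hypothetical):

```sh
# Fetch a PR as a mail-style patch straight from GitHub and review it.
curl -sL https://github.com/OWNER/REPO/pull/123.patch | llm -s "Review this patch."
```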
I didn't see this mentioned, but we've been running bugbot for a while now and it's very good. It catches so many subtle bugs.
"Diff to master and review the changes. Branch designed to address <problem statement>. Write output to d:\claudeOut in typst (.typ) format."
It'll do the diffs and search both branch and master versions of files.
I prefer reading PDFs than markdown, but it'll default to markdown unprompted if you prefer.
I have almost all my workspaces configured with /add-dir to add d:/claudeOut and d:/claudeIn as general scratch folders for temporary in/out file permissions so it can read/write outside the context of the workspace for things like this.
You might get better results using a better crafted prompt (or code review skill?). In general I find claude code reviews are:
So it's a bit of a mixed bag, I find it focuses on trivia but it's still useful as a first pass before letting your teammates have to catch that same trivia.It will absolutely assume too much from naming, so it's kind of a good spot if it's making wrong kind of assumptions about how parts work, to think how to name things more clearly.
e.g. If you write a class called "AddingFactory", it'll go around assuming that's what it does, even if the core of it returns (a, b) -> a*b.
You have to then work hard to get it to properly examine the file and convince itself that it is actually a multiplier.
Obviously real-world examples are more subtle than that, but if you're finding yourself arguing with it, it's worth sometimes considering whether you should rename things.