GitHub + OpenAI: consistent PR reviews on every pull
PR reviews are where good releases go to die. Not because people don’t care, but because the “quick check” turns into a half-hour context switch, and the same comments get written again and again.
The pain hits engineering leads hardest, but agency owners shipping client work and fast-moving product teams feel it too. This GitHub PR review automation gets you consistent, readable review notes on every pull request, with a clear “Reviewed by AI” label so nothing slips through.
Below you’ll see how the workflow runs inside n8n, what it produces on a real PR, and what you need to roll it out without turning your repo into an experiment.
How This Automation Works
The full n8n workflow, from trigger to final output:
n8n Workflow Template: GitHub + OpenAI: consistent PR reviews on every pull
```mermaid
flowchart LR
subgraph sg0["PR Flow"]
direction LR
n0@{ icon: "mdi:brain", form: "rounded", label: "OpenAI Chat Model", pos: "b", h: 48 }
n1["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/github.dark.svg' width='40' height='40' /></div><br/>PR Trigger"]
n2["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Get file's Diffs from PR"]
n3["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Create target Prompt from PR.."]
n4["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/github.dark.svg' width='40' height='40' /></div><br/>GitHub Robot"]
n5["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/github.dark.svg' width='40' height='40' /></div><br/>Add Label to PR"]
n6@{ icon: "mdi:database", form: "rounded", label: "Code Best Practices", pos: "b", h: 48 }
n7@{ icon: "mdi:robot", form: "rounded", label: "Code Review Agent", pos: "b", h: 48 }
n1 --> n2
n4 --> n5
n7 --> n4
n0 -.-> n7
n6 -.-> n7
n2 --> n3
n3 --> n7
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n1 trigger
class n7 ai
class n0 aiModel
class n6 database
class n2 api
class n3 code
classDef customIcon fill:none,stroke:none
class n1,n2,n3,n4,n5 customIcon
```
The Problem: PR reviews are inconsistent and slow
Manual PR review is one of those processes that looks fine on paper, then quietly drains your week. Someone opens a pull request, you skim the diff, you leave a few notes, and then you repeat the same “naming, error handling, tests, edge cases” checklist for the tenth time. Meanwhile, the author waits, the branch gets stale, and the next PR piles on. The worst part is the inconsistency. One reviewer cares about logging, another cares about style, and the team learns to treat feedback as subjective noise instead of a reliable bar.
It adds up fast. Here’s where it breaks down in day-to-day work.
- Reviews get delayed because they require deep focus, and deep focus is always booked.
- Common feedback repeats across PRs, so senior engineers spend time retyping rules instead of solving harder problems.
- Important issues slip through when reviewers are tired, rushed, or unfamiliar with a part of the codebase.
- Newer developers get uneven guidance, which slows onboarding and creates “hidden standards” nobody can name.
The Solution: OpenAI reviews every new PR and posts a labeled comment
This workflow turns pull request review into a consistent, repeatable system. It starts the moment a new PR is created in GitHub. n8n pulls the list of changed files and fetches the code diffs, then assembles a review prompt that includes both the changes and your internal best practices. Those best practices can live in a Google Sheet, which is honestly a great place for them because non-developers can update the rules without touching the workflow. An OpenAI-powered agent reads the diff, checks it against your guidelines, and generates clear review notes. Finally, the workflow posts that review directly back to the PR as a comment and applies a visible “Reviewed by AI” label so the team can triage quickly.
The workflow starts on a GitHub pull request trigger. From there it grabs PR diffs, enriches the context using a guidelines sheet, and asks the AI agent to produce review feedback in plain language. At the end, GitHub gets two updates: a comment and a label.
What You Get: Automation vs. Results
| What This Workflow Automates | Results You’ll Get |
|---|---|
| Watching the repo and reviewing every new PR the moment it opens | Consistent first-pass feedback without waiting on reviewer availability |
| Fetching file diffs and checking them against your team’s guidelines | Review notes that reflect your standards instead of whichever reviewer is free |
| Posting the review comment and applying the “Reviewed by AI” label | Clear triage status in the PR list and hours of senior-engineer time back each week |
Example: What This Looks Like
Say your team ships 15 PRs a week. A quick human review is rarely “quick”; even 20 minutes per PR is about 5 hours weekly, and that’s before follow-up comments. With this workflow, the author opens the PR and within a few minutes there’s a clear AI review comment plus the “Reviewed by AI” label. Someone still does the final human sign-off, but they start from a tighter baseline, not a blank page. That’s a few hours back every week, consistently.
What You’ll Need
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- GitHub for PR triggers, comments, and labels
- OpenAI to generate the review feedback
- OpenAI API key (get it from the OpenAI API dashboard)
Skill level: Intermediate. You’ll connect credentials, pick repos, and adjust a prompt, but you won’t need to write application code.
Don’t want to set this up yourself? Talk to an automation expert (free 15-minute consultation).
How It Works
A new pull request kicks everything off. The GitHub trigger watches your repo for PR creation events, so the workflow runs the moment a PR appears.
The workflow collects the actual changes. n8n fetches the PR’s file diffs via an HTTP request, then reshapes that data so the AI sees a clean, readable summary of what changed (not a messy blob).
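The GitHub files endpoint returns one object per changed file (`filename`, `status`, `additions`, `deletions`, `patch`). A minimal sketch of that reshaping step, with an illustrative helper name (the workflow’s actual Code node may differ):

```javascript
// Sketch of reshaping the GitHub "list PR files" response into a
// readable summary. Field names match the real API response;
// summarizeDiffs is an illustrative name, not a node in the workflow.
function summarizeDiffs(files) {
  return files
    .map((f) => {
      const header = `${f.filename} (${f.status}, +${f.additions}/-${f.deletions})`;
      // Binary or very large files come back without a textual patch.
      return f.patch ? `${header}\n${f.patch}` : `${header}\n(no textual diff)`;
    })
    .join("\n\n");
}

// Example input mirroring the API response shape
const files = [
  { filename: "src/auth.js", status: "modified", additions: 3, deletions: 1, patch: "@@ -10,1 +10,3 @@ ..." },
];
console.log(summarizeDiffs(files));
```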
Your internal standards get pulled in. If you maintain best practices in Google Sheets, the workflow looks them up and adds them to the review context so the feedback matches how you want code written.
The AI generates feedback and GitHub gets updated. The OpenAI chat model powers a “code review assistant” agent that produces structured review notes, which are posted back to the PR as a comment. Then the workflow applies a “Reviewed by AI” label to make the status visible in the PR list.
You can easily modify the guidelines source to use a different doc or database based on your needs. See the full implementation guide below for customization options.
Step-by-Step Implementation Guide
Step 1: Configure the Pull Request Trigger
Set up the GitHub trigger so the workflow starts whenever a pull request event occurs in your repository.
- Add and open Pull Request Trigger.
- Set Authentication to `oAuth2`.
- Select your GitHub Owner and Repository from the list fields.
- Set Events to `pull_request`.
- Credential Required: Connect your `githubOAuth2Api` credentials.
Step 2: Connect GitHub Data Retrieval
Fetch file diffs from the pull request to provide the content needed for the review prompt.
- Add Fetch PR File Diffs and connect it after Pull Request Trigger.
- Set URL to `=https://api.github.com/repos/{{$json.body.sender.login}}/{{$json.body.repository.name}}/pulls/{{$json.body.number}}/files`.
- Keep Options as default unless you need custom headers.
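As a sanity check, here is how that URL expression resolves for a hypothetical PR. The field paths mirror the n8n expression; `buildDiffUrl` and the payload values are illustrative:

```javascript
// Mirrors the n8n expression: {{$json.body.sender.login}},
// {{$json.body.repository.name}}, {{$json.body.number}}.
// buildDiffUrl is an illustrative helper, not part of the workflow.
function buildDiffUrl(body) {
  return `https://api.github.com/repos/${body.sender.login}/${body.repository.name}/pulls/${body.number}/files`;
}

// Hypothetical webhook payload fragment
const body = { sender: { login: "acme" }, repository: { name: "webapp" }, number: 42 };
console.log(buildDiffUrl(body));
// → https://api.github.com/repos/acme/webapp/pulls/42/files
```

One caveat: `sender.login` is whoever triggered the event, which only matches the repo owner on personal repos. For organization repos you may prefer `{{$json.body.repository.full_name}}`, which GitHub’s webhook payload provides as `owner/repo` in a single field.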
Step 3: Set Up the Review Prompt and AI Agent
Build the prompt from the diffs and configure the AI agent with the OpenAI model and optional guidelines tool.
- Add Assemble Review Prompt and connect it after Fetch PR File Diffs.
- Keep the provided JavaScript Code to generate `user_message` from file patches.
- Add Code Review Assistant and connect it after Assemble Review Prompt.
- Set Text to `{{ $json.user_message }}` and keep Prompt Type as `define`.
- Open OpenAI Chat Engine and select model `gpt-4o-mini`.
- Credential Required: Connect your `openAiApi` credentials in OpenAI Chat Engine.
- Open Guidelines Sheet Lookup and set Document ID and Sheet Name to your review rules source.
- Credential Required: Connect your `googleSheetsOAuth2Api` credentials. This tool is attached to Code Review Assistant, so ensure credentials are available to the parent agent connection.
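A rough sketch of what a Code node like Assemble Review Prompt might contain. The exact template in the workflow may differ; the key point is that it returns an item with a `user_message` field, which is what `{{ $json.user_message }}` reads downstream:

```javascript
// Illustrative sketch of an "Assemble Review Prompt" Code node.
// The actual workflow's template may differ.
function assemblePrompt(files) {
  const sections = files
    .filter((f) => f.patch) // binary files have no textual patch
    .map((f) => `File: ${f.filename}\n${f.patch}`);
  return [
    "Review the following pull request changes against our guidelines.",
    "Flag bugs, naming issues, missing error handling, and missing tests.",
    sections.join("\n\n"),
  ].join("\n\n");
}

// Inside an n8n Code node this would be roughly:
//   const files = $input.all().map((i) => i.json);
//   return [{ json: { user_message: assemblePrompt(files) } }];
const demo = [{ filename: "src/app.js", patch: "@@ -1 +1 @@\n-old\n+new" }];
console.log(assemblePrompt(demo));
```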
Step 4: Configure Review Output Actions
Send the AI-generated review as a GitHub review comment and apply a label to the pull request.
- Add Post Review Comment and connect it after Code Review Assistant.
- Set Resource to `review` and Event to `comment`.
- Set Body to `{{ $json.output }}`.
- Set Pull Request Number to `{{ $('Pull Request Trigger').first().json.body.number }}`.
- Credential Required: Connect your `githubApi` credentials.
- Add Apply Review Label and connect it after Post Review Comment.
- Set Operation to `edit` and Authentication to `oAuth2`.
- Under Edit Fields → Labels, add `ReviewedByAI`.
- Set Issue Number to `{{ $('Pull Request Trigger').first().json.body.number }}`.
- Credential Required: Connect your `githubOAuth2Api` credentials.
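Under the hood, these two nodes correspond to standard GitHub REST calls. The endpoints below are the documented API; the helper names are illustrative, since n8n’s GitHub node builds the requests for you:

```javascript
// The two output nodes map to these documented GitHub REST endpoints.
// Helper names are illustrative only.
function reviewCommentRequest(owner, repo, prNumber, reviewText) {
  // POST /repos/{owner}/{repo}/pulls/{pull_number}/reviews
  return {
    method: "POST",
    url: `https://api.github.com/repos/${owner}/${repo}/pulls/${prNumber}/reviews`,
    body: { event: "COMMENT", body: reviewText },
  };
}

function addLabelRequest(owner, repo, prNumber, label) {
  // Labels go through the Issues API; every PR is also an issue,
  // which is why the n8n node asks for an "Issue Number".
  return {
    method: "POST",
    url: `https://api.github.com/repos/${owner}/${repo}/issues/${prNumber}/labels`,
    body: { labels: [label] },
  };
}

console.log(addLabelRequest("acme", "webapp", 42, "ReviewedByAI").url);
```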
Note: The comment Body resolves from `$json.output`. Ensure the agent returns output text; otherwise the review will be empty.
Step 5: Test and Activate Your Workflow
Verify the workflow behavior with a real pull request and then enable it for production use.
- Click Execute Workflow and trigger a pull request event in your GitHub repository.
- Confirm that Fetch PR File Diffs returns file data and Assemble Review Prompt produces a `user_message`.
- Check that Post Review Comment adds a review comment and Apply Review Label adds `ReviewedByAI`.
- If all steps succeed, toggle the workflow to Active for continuous use.
Common Gotchas
- GitHub credentials can expire or need specific permissions. If things break, check the n8n Credentials page and confirm your token can read PRs and create PR comments.
- If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
- Default prompts in AI nodes are generic. Add your brand voice early or you’ll be editing outputs forever.
Frequently Asked Questions
How long does this take to set up?
About an hour if your GitHub and OpenAI credentials are ready.
Do I need to know how to code?
No. You’ll connect accounts and adjust a few fields in n8n. The only “code-like” part is editing the review prompt in plain English.
Can I run this for free?
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in OpenAI API costs, which are usually a few cents per review depending on diff size.
Should I use n8n Cloud or self-host?
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
Can I customize the review guidelines for different repos or teams?
Yes, but you’ll want to be intentional. Most teams create one workflow per repo (or per language) and point the “Guidelines Sheet Lookup” to the right tab in Google Sheets. You can also pass the repo name into the prompt and load different guideline rows based on that value. Common tweaks include changing the tone of feedback, enforcing test coverage expectations, and adding “security checks” that reviewers often forget.
What should I check if comments or labels stop posting?
Usually it’s expired credentials or missing scopes on your GitHub OAuth app or PAT. Update the credential in n8n, then confirm the token can read pull requests, read files, and create PR comments. If commenting works but labeling fails, check that the repo allows the token to manage labels and that the label name matches exactly. Rate limiting can also show up if you run this across many repos at once, so spacing executions can help.
How many PRs can this handle?
A lot, as long as your execution limits and OpenAI budget match your volume. On n8n Cloud, your monthly execution cap depends on the plan; self-hosting removes the platform cap and shifts the limit to your server capacity. In practice, the bottleneck is usually API rate limits or very large diffs, not n8n itself.
Is n8n a better fit than Zapier or Make for this?
Often, yes. This workflow benefits from n8n’s branching and agent-style logic, plus the option to self-host when PR volume grows. Zapier or Make can still work, but you may hit limits once you start looping through files, merging context, and controlling formatting. Also, you’ll probably want tighter control over what gets sent to the model, which is easier when you can edit the prompt assembly step. If you’re unsure, Talk to an automation expert and we’ll map it to your repo volume and review process.
Consistent PR feedback shouldn’t depend on who has time that day. Set this up once, and every pull request gets a clear first-pass review you can trust.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.