OpenAI + Google Sheets: ranked CV shortlists fast
CV screening sounds simple until you have 25 PDFs, one job description, and a hiring manager asking for “top 5 by tomorrow.” Then it turns into copy-paste chaos, inconsistent gut calls, and notes that don’t match what you said in the last hiring round.
This is CV screening automation built for recruiters trying to move faster, founders doing hiring on the side, and HR managers who need decisions they can defend. You get a ranked shortlist with fit scores, strengths, weaknesses, and a clear recommendation, without re-reading the same resume twice.
Below, you’ll see how the workflow runs in n8n, what it outputs, and what you need to set it up so screening stays consistent even when volume spikes.
How This Automation Works
The full n8n workflow, from trigger to final output:
n8n Workflow Template: OpenAI + Google Sheets: ranked CV shortlists fast
```mermaid
flowchart LR
  subgraph sg0["AI Recruiter Agent Flow"]
    direction LR
    n0["Webhook"]
    n1["Respond to Webhook"]
    n2["AI Recruiter Agent"]
    n3["Parse Recruiter Output"]
    n4["List_File"]
    n5["Detect PDF Type"]
    n6{"If"}
    n7["OpenAI Chat Model"]
    n8["Extract from File"]
    n9["Convert Base64 to Binary"]
    n10["Loop Over Items"]
    n11["Replace Me"]
    n12["Reattach_Metadata_After_Extr.."]
    n13["Combine_Candidates_For_AI"]
    n14["Preprocess_CV_Names"]
    n0 --> n4
    n4 --> n5
    n5 --> n6
    n6 --> n9
    n9 --> n10
    n10 --> n8
    n10 --> n11
    n11 --> n10
    n8 --> n12
    n12 --> n13
    n13 --> n14
    n14 --> n2
    n7 -.-> n2
    n2 --> n3
    n3 --> n1
  end
  %% Styling
  classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
  classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
  classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
  classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
  classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
  class n2 ai
  class n7 aiModel
  class n6 decision
  class n0,n1 api
  class n3,n4,n5,n9,n12,n13,n14 code
```
The Problem: CV Screening Gets Messy Fast
Manual screening breaks down the moment candidates arrive in mixed formats. Some CVs are clean text. Others are PDFs with odd layouts, missing headings, or names that don’t match the file name. You end up opening each file, hunting for experience, then trying to compare candidates in your head while your notes live in three places. The worst part is consistency. Two people can read the same CV and score it completely differently, which leads to extra meetings and second-guessing.
It adds up fast. Here’s where it usually goes sideways.
- Reading and re-reading CVs eats a few hours per role, and that’s before you even start shortlisting.
- PDF extraction is unreliable, so “quick scoring” turns into manual cleanup.
- Hiring feedback becomes subjective because there’s no shared rubric tied to the job description.
- Your shortlist isn’t audit-friendly, which makes approvals and stakeholder alignment slower than it should be.
The Solution: OpenAI Scoring + Google Sheets Ranking
This workflow turns a pile of CVs and one job description into a structured shortlist you can actually use. It starts when you upload a JD and multiple CV files through a webhook (PDF or text). The workflow detects which file is the JD versus candidate documents, then extracts text from PDFs so everything is comparable. Next, an AI recruiter agent evaluates each candidate against the JD using a consistent rubric, generating a fit score plus strengths, weaknesses, and a recommendation. Finally, results are returned in a clean response and can be logged to Google Sheets so your shortlist becomes a living dashboard instead of a one-off message in Slack.
In short: the webhook receives the JD and CVs, the workflow normalizes file names, extracts text (including from PDFs), and batches candidates for scoring; OpenAI then produces structured evaluation data, and the workflow returns a ranked set of results ready for reporting and sheet-based review.
What You Get: Automation vs. Results
| What This Workflow Automates | Results You’ll Get |
|---|---|
| Separating the JD from candidate files and extracting text from PDFs | Every CV becomes comparable text, with no manual cleanup |
| Scoring each candidate against the JD with a consistent rubric | A fit score plus strengths, weaknesses, and a recommendation per candidate |
| Ranking results and logging them to Google Sheets | An audit-friendly shortlist you can defend to stakeholders |
Example: What This Looks Like
Say you’re hiring for one role and you receive 20 CV PDFs. Manually, even a quick first-pass scan at about 8 minutes per CV is roughly 3 hours, and you still have to write notes and rank candidates afterward. With this workflow, you upload the JD plus the 20 CVs once (a couple minutes), then wait while extraction and scoring runs (often around 15–20 minutes depending on PDFs and model speed). You get a ranked list with consistent reasoning, and you can drop it into Google Sheets immediately.
What You’ll Need
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- OpenAI for scoring and recruiter-style analysis
- Google Sheets to log results and rank candidates
- OpenAI API key or n8n AI Agent credential (get it from your OpenAI dashboard or n8n credentials)
Skill level: Intermediate. You’ll connect credentials, test the webhook upload, and tweak the evaluation prompt to match your role requirements.
Don’t want to set this up yourself? Talk to an automation expert (free 15-minute consultation).
How It Works
A webhook upload kicks everything off. You submit one job description file and a batch of candidate CV files (PDF or text). The workflow immediately acknowledges the request so you’re not waiting in a spinning browser tab.
The workflow identifies what’s what. It collects file entries, detects formats, and separates the JD from CVs. If a CV is a PDF, it converts the base64 payload into a binary file so it can be extracted reliably.
Candidate text is extracted and normalized. PDFs go through text extraction, then the workflow reattaches metadata (like the original file name) and normalizes candidate naming so results don’t come back as “document (3).pdf.” Small detail. Big quality-of-life improvement.
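The name-normalization idea can be sketched in plain JavaScript. This is a minimal stand-in for the template’s Code node, not its actual code; the function name and cleanup rules here are illustrative:

```javascript
// Illustrative sketch of "Normalize CV Names": turn a raw upload filename
// into a readable candidate label so results don't come back as
// "document (3).pdf". The template's exact rules may differ.
function normalizeCvName(filename) {
  return filename
    .replace(/\.(pdf|txt|docx?)$/i, '')      // drop the file extension
    .replace(/\s*\(\d+\)\s*$/, '')           // drop copy suffixes like " (3)"
    .replace(/[_\-.]+/g, ' ')                // underscores/dashes -> spaces
    .trim()
    .replace(/\b\w/g, c => c.toUpperCase()); // title-case each word
}

console.log(normalizeCvName('jane_doe-CV (3).pdf')); // -> "Jane Doe CV"
```

In n8n, the same logic lives in a Code node set to run once per item, writing the cleaned name back onto the candidate’s JSON.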
OpenAI evaluates each candidate in batches. The AI recruiter agent compares each CV against the JD and returns structured fields such as fit score, strengths, weaknesses, and a recommendation. A code step then analyzes the recruiter results so the final response is consistent and easy to store.
You can easily modify the scoring criteria to match your hiring rubric based on your needs. See the full implementation guide below for customization options.
Step-by-Step Implementation Guide
Step 1: Configure the Webhook Trigger
Set up the inbound webhook that receives the JD text and CV files from your form or client app.
- Add the Incoming Webhook Trigger node and set HTTP Method to `POST`.
- Set Path to `chat-new`.
- Set Response Mode to `responseNode` so the workflow replies through Return Webhook Response.
- Connect Incoming Webhook Trigger to Collect File Entries.

Tip: Send a test payload containing `body.message` (JD text) and `body.files[]` (base64 files) to validate parsing in Collect File Entries.
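A client-side submitter can be sketched as follows. The webhook host below is a placeholder for your own n8n instance, and the helper names are illustrative; only the payload fields (`message`, `files[].filename`, `files[].base64`) follow what this step describes:

```javascript
// Sketch of a client that posts a JD plus base64-encoded CVs to the webhook.
function encodeFile(filename, buffer) {
  // Wrap a file's raw bytes as the { filename, base64 } entry the
  // workflow expects in body.files[].
  return { filename, base64: buffer.toString('base64') };
}

async function submitBatch(webhookUrl, jdText, files) {
  // message carries the JD text; files carries the encoded CVs.
  const res = await fetch(webhookUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: jdText, files }),
  });
  return res.json();
}

// Example (placeholder URL):
// submitBatch('https://YOUR-N8N-HOST/webhook/chat-new', jdText,
//             [encodeFile('jane_doe_cv.pdf', pdfBuffer)]);
```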
Step 2: Connect CV File Intake and PDF Detection
Parse incoming files, detect PDF type, and route only text-based PDFs into the extraction pipeline.
- In Collect File Entries, keep the existing code to map `body.files` into items containing `jd`, `filename`, and `base64`.
- In Identify PDF Format, keep Mode set to `runOnceForEachItem` to classify each file as `text` or `scan`.
- Configure Branch on PDF Text with Loose Type Validation and the condition set to `={{ $json["pdf_type"] === "text" }}`.
- Connect Branch on PDF Text to Decode Base64 to Binary.

Note: If `base64` is missing, Identify PDF Format returns `pdf_type = unknown`, so the file will not be processed. Validate your client payload before testing.
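The intake mapping can be sketched in plain JavaScript. This is a simplified stand-in: the real Identify PDF Format node classifies PDFs as `text` or `scan`, which this sketch glosses over, and the function name is illustrative:

```javascript
// Sketch of the Collect File Entries mapping: split the webhook body into
// one item per file, carrying the JD text alongside each CV so downstream
// nodes can score them independently.
function collectFileEntries(body) {
  return (body.files || []).map(f => ({
    jd: body.message,          // JD text travels with every candidate item
    filename: f.filename,
    base64: f.base64,
    // Simplified classification: without a base64 payload the file cannot
    // be decoded, so mark it unknown and it will be skipped later.
    pdf_type: f.base64 ? 'text' : 'unknown',
  }));
}
```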
Step 3: Set Up PDF Extraction and Candidate Assembly
Convert base64 files into binary PDFs, extract text, and rebuild metadata into a candidate list for evaluation.
- In Decode Base64 to Binary, keep Mode set to `runOnceForEachItem` and retain the binary mapping to `application/pdf`.
- Connect Decode Base64 to Binary to Iterate Records Batch, then connect Iterate Records Batch to Extract PDF Text.
- In Extract PDF Text, set Operation to `pdf`.
- Connect Extract PDF Text to Rejoin Metadata After Extract, then to Assemble Candidates Payload, and then to Normalize CV Names.
- Leave Placeholder Step connected from Iterate Records Batch for debugging and future expansion.
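The decode step boils down to a base64-to-bytes conversion. A plain-Node sketch (inside an n8n Code node you would return the data as a binary property with mimeType `application/pdf` instead of a raw Buffer; the function name and sanity check are illustrative):

```javascript
// Sketch of Decode Base64 to Binary: convert the base64 string into bytes
// so the PDF extractor can read it.
function decodeBase64Pdf(entry) {
  const buffer = Buffer.from(entry.base64, 'base64');
  // A real PDF starts with the magic bytes "%PDF" -- a cheap sanity check
  // before handing the file to text extraction.
  const looksLikePdf = buffer.subarray(0, 4).toString('ascii') === '%PDF';
  return { filename: entry.filename, buffer, looksLikePdf };
}
```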
Step 4: Configure the AI Evaluation Layer
Set up the AI evaluator to compare JD and CVs and produce structured scoring results.
- Open Recruitment AI Evaluator and keep the prompt in Text as-is to ensure structured scoring and strict name handling. The prompt uses `{{ $json.jd }}` and `{{ JSON.stringify($json.candidates) }}` to pass the JD and candidates.
- Ensure OpenAI Chat Engine is connected as the language model for Recruitment AI Evaluator.
- Credential Required: Connect your openAiApi credentials in OpenAI Chat Engine.
- Connect Normalize CV Names to Recruitment AI Evaluator, then to Analyze Recruiter Results.
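The prompt assembly can be mirrored in plain JavaScript. The template interpolates `{{ $json.jd }}` and `{{ JSON.stringify($json.candidates) }}`; the instruction wording and field names below are illustrative, not the template’s exact prompt:

```javascript
// Sketch of how the evaluator prompt combines the JD with the candidate
// batch. The requested JSON shape mirrors the fields described in this
// article (fit score, strengths, weaknesses, recommendation).
function buildEvaluatorPrompt(jd, candidates) {
  return [
    'You are a recruiter. Score each candidate against the job description.',
    'Return JSON: [{ name, fit_score (0-100), strengths, weaknesses, recommendation }].',
    `Job description:\n${jd}`,
    `Candidates:\n${JSON.stringify(candidates)}`,
  ].join('\n\n');
}
```

Keeping the candidates serialized as one JSON array in a single prompt is what lets the model score the whole batch with one consistent rubric.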
Step 5: Configure Output Response
Return the scoring results to the caller as the webhook response.
- In Analyze Recruiter Results, keep the JSON parsing and candidate ranking logic intact to produce `summary_text` and candidate metrics.
- In Return Webhook Response, set Respond With to `allIncomingItems`.
- Confirm Analyze Recruiter Results connects to Return Webhook Response.
Step 6: Test and Activate Your Workflow
Verify end-to-end processing before enabling the workflow in production.
- Click Execute Workflow and send a POST request to the Incoming Webhook Trigger URL with a sample JD in `body.message` and base64 PDFs in `body.files[]`.
- Confirm successful runs: Extract PDF Text outputs `text`, Recruitment AI Evaluator returns scored candidates, and Return Webhook Response responds with a JSON payload including `summary_text`.
- When testing is successful, toggle the workflow to Active to start processing live submissions.
Common Gotchas
- OpenAI credentials can expire or get blocked by missing billing/permissions. If things break, check your OpenAI API key status and usage limits in the OpenAI dashboard first.
- If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
- Default prompts in AI nodes are generic. Tailor the scoring rubric to your role requirements early or you’ll be editing outputs forever.
Frequently Asked Questions
How long does setup take?
About 30 minutes if your OpenAI and Google Sheets access is ready.
Do I need to know how to code?
No. You’ll mainly connect accounts and paste in your scoring criteria. If you want custom scoring formulas, a little JavaScript helps, but it’s optional.
Is it free to run?
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in OpenAI API costs, which are usually a few cents per candidate depending on CV length and the model.
Should I use n8n Cloud or self-host?
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
Can it screen CVs in other languages?
Yes, and it’s one of the better use cases for this setup. You can adjust the prompt used by the AI recruiter agent to tell it which language to respond in, or to return bilingual fields for strengths and weaknesses. If you want different rubrics by language, add a simple “language detected” branch before the AI step and route candidates to separate prompt templates. Many teams also add one extra output column in Google Sheets for “language confidence” so reviewers know when to double-check.
Why is the OpenAI step failing?
Usually it’s an invalid or expired API key, or billing is not enabled on the OpenAI account. Update the credential in n8n and re-run with one candidate first to confirm it works. If you’re sending many long PDFs at once, rate limits can also cause intermittent failures, so batching (Split in Batches) and slightly slower runs help.
How many CVs can it handle per run?
Dozens per run is normal, and larger batches work if you process them in smaller chunks.
Is n8n better than Zapier or Make for this?
Often, yes, because this workflow needs branching, batching, and file handling (PDF extraction plus structured AI output) that gets clunky and expensive in simpler tools. n8n also gives you more control over how candidate data is merged, named, and returned, which matters when you’re presenting results to stakeholders. Zapier or Make can still be fine if you only score one CV at a time and don’t care about clean, repeatable ranking logic. If your process includes “upload many files, score all, rank, then log,” n8n is usually the calmer option. Talk to an automation expert if you want a quick recommendation based on your volume.
Once this is running, screening stops being the bottleneck. You get cleaner shortlists, clearer decisions, and a workflow you can repeat for every role without reinventing your process.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.