January 22, 2026

GitHub to Google Sheets, ranked leads for hiring

Lisa Granqvist, Partner, Workflow Automation Expert

Recruiting on GitHub sounds simple until you do it for real. You search, click profiles, guess who’s legit, paste links into a spreadsheet, then lose the thread the next day.

This GitHub hiring automation is aimed at technical recruiters first, but hiring managers and founders run into the same pain. You’ll end up with a clean, deduped Google Sheet of ranked leads, plus Slack messages that confirm updates so you’re not constantly double-checking.

Below is the workflow, what it fixes, and what you can expect when it runs on a schedule and quietly builds your pipeline in the background.

How This Automation Works

The full n8n workflow, from trigger to final output:

n8n Workflow Template: GitHub to Google Sheets, ranked leads for hiring

The Problem: GitHub sourcing turns into messy, unscalable busywork

Finding great developers on GitHub isn’t the hard part. The hard part is turning “interesting profiles” into an organized, shareable pipeline you can actually act on. You open 30 tabs, skim repos, try to infer seniority from followers and activity, then copy-paste everything into a sheet that quickly becomes inconsistent. Add a second person sourcing and you start stepping on each other’s toes. Same candidate, two rows. Or worse, no row at all because “I thought you already logged them.”

It adds up fast. Here’s where it breaks down.

  • You spend about 5–10 minutes per profile just collecting the basics, and that’s before you even judge fit.
  • Scoring is subjective, so the “best” leads depend on who happened to be sourcing that day.
  • Duplicates creep in when names vary, profiles change, or someone logs a GitHub handle instead of a full name.
  • There’s no reliable “done” signal, which means constant checking and a lot of second-guessing.

The Solution: scheduled GitHub scraping + AI scoring into a deduped Google Sheet

This workflow turns talent sourcing into something that behaves more like a pipeline. It runs on a schedule (hourly, daily, whatever you choose), kicks off a BrowserAct scraping task on GitHub based on your criteria (think “Python developers in Berlin”), then waits for the scrape results to come back. Once the raw data lands, n8n transforms it into individual candidate items and sends each profile through an AI Agent. The AI evaluates what matters for hiring signals (profile summary quality, repos, followers, and more), then produces structured output including a weighted FinalScore.

Finally, the workflow writes each candidate to Google Sheets and uses the “Name” column to prevent duplicates. You also get Slack notifications: one for failures (so you don’t miss outages), and one that confirms the sheet was updated successfully.

What You Get: Automation vs. Results

Example: What This Looks Like

Say you review 25 GitHub profiles a day. If it takes roughly 8 minutes to open the profile, scan repos, sanity-check location, and log the lead, that’s about 3 hours of sourcing time. With this workflow, the “manual” part is closer to reviewing a ranked sheet and picking who to contact, maybe 20–30 minutes. The scrape and scoring run in the background on a schedule, and Slack tells you when new rows landed.
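The math above, spelled out. The per-profile and per-day numbers are the article's own example figures, not measurements:

```javascript
// Back-of-envelope time savings using the numbers from the example above.
const profilesPerDay = 25;
const minutesPerProfile = 8;

const manualMinutes = profilesPerDay * minutesPerProfile; // 200 minutes, about 3.3 hours
const reviewMinutes = 25;                                 // midpoint of the 20-30 minute review
const savedHoursPerDay = (manualMinutes - reviewMinutes) / 60;

console.log(`Manual: ${manualMinutes} min, saved: ${savedHoursPerDay.toFixed(1)} h/day`);
```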

What You’ll Need

  • n8n instance (try n8n Cloud free)
  • Self-hosting option if you prefer (Hostinger works well)
  • BrowserAct for scraping GitHub by criteria
  • Google Sheets to store ranked, deduped leads
  • Slack for success and error notifications
  • Gemini API access (get it from Google AI Studio)

Skill level: Intermediate. You’ll connect accounts, set scraping criteria, and map a few fields into your sheet.

Don’t want to set this up yourself? Talk to an automation expert (free 15-minute consultation).

How It Works

A schedule triggers the run. Set it to hourly if you’re hiring aggressively, or daily if you just want a steady drip of fresh leads.

BrowserAct scrapes GitHub based on your filters. You define the search intent (language and location), then the workflow launches a scraping job and checks back for completion. If the task fails, Slack gets an error alert so you’re not waiting on missing leads.

n8n cleans the scrape output and prepares candidates. A transformation step splits the raw JSON into one item per developer so scoring and sheet updates are consistent.

The AI Agent scores and structures each profile. Powered by a Gemini chat model, it evaluates the profile and produces a normalized result (including a FinalScore) that’s easy to sort and review.

Google Sheets stores the pipeline, Slack confirms the update. New candidates are added as clean rows, and duplicates are avoided using the Name column. You can easily modify your search criteria and the scoring weights based on your needs. See the full implementation guide below for customization options.

Step-by-Step Implementation Guide

Step 1: Configure the Scheduled Trigger

This workflow begins on a schedule, which kicks off the BrowserAct task automation.

  1. Add or open Scheduled Automation Trigger.
  2. Set the rule interval to run hourly by configuring Rule to {"interval":[{"field":"hours"}]}.
  3. Connect Scheduled Automation Trigger to Launch Workflow Task to initiate the task chain.

If you need a different schedule (e.g., daily), update the rule interval before testing.
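For a daily run, the rule would look something like the fragment below. This mirrors the hourly rule format shown in step 2 and is an assumption based on n8n's Schedule Trigger options, so verify the field name in your n8n version:

```json
{ "interval": [{ "field": "days" }] }
```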

Step 2: Connect BrowserAct Task Execution

The BrowserAct workflow runs the GitHub contributor search and returns the raw task output.

  1. Open Launch Workflow Task and set Workflow ID to [YOUR_ID].
  2. Configure Input Parameters such as Language = Python, Location = Berlin, Total_Page = 2, and Public_Repositories = 5.
  3. Credential Required: Connect your browserActApi credentials in Launch Workflow Task.
  4. Open Retrieve Task Details and set Task ID to {{ $json.id }}, Operation to getTask, Wait For Finish to true, Max Wait Time to 600, and Polling Interval to 20.
  5. Credential Required: Connect your browserActApi credentials in Retrieve Task Details.

⚠️ Common Pitfall: If Workflow ID is left as [YOUR_ID], BrowserAct will fail to start and the workflow will not return results.
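Under the hood, "Wait For Finish" with a max wait and polling interval behaves roughly like the loop below. This is an illustrative sketch, not BrowserAct's actual client code; the getTask function and the status strings are assumptions:

```javascript
// Illustrative polling loop for "Wait For Finish" (Max Wait Time 600s, Polling Interval 20s).
// getTask and the "completed"/"failed" status values are assumptions for this sketch.
async function waitForTask(getTask, taskId, { maxWaitTime = 600, pollingInterval = 20 } = {}) {
  const deadline = Date.now() + maxWaitTime * 1000;
  while (Date.now() < deadline) {
    const task = await getTask(taskId);
    if (task.status === "completed") return task; // success: continue the workflow
    if (task.status === "failed") throw new Error("BrowserAct task failed"); // routes to the Slack alert
    await new Promise((resolve) => setTimeout(resolve, pollingInterval * 1000));
  }
  throw new Error(`Task ${taskId} did not finish within ${maxWaitTime}s`);
}
```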

Step 3: Set Up Data Transformation and AI Scoring

The task output is parsed into items, then scored with the AI agent and structured for downstream updates.

  1. In Transform JSON Script, keep the JavaScript code that parses $input.first().json.output.string into multiple items.
  2. Open Candidate Scoring Agent and set Text to the provided prompt containing expressions like {{ $json.Summary }}, {{ $json.Followers }}, and {{ $json.TotalRepo }}.
  3. Ensure Candidate Scoring Agent has Has Output Parser enabled and that Structured Result Parser is attached as the output parser.
  4. Open Structured Result Parser and set JSON Schema Example to the provided schema template.
  5. Open Gemini Conversation Model and attach it as the language model for Candidate Scoring Agent.
  6. Credential Required: Connect your googlePalmApi credentials in Gemini Conversation Model. The Structured Result Parser uses the parent agent and does not take credentials directly.

The Transform JSON Script throws an error if $input.first().json.output.string is missing—verify the BrowserAct output path matches this expected structure.
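As a rough sketch, the Code node's logic looks something like this. It is simulated outside n8n with a mock payload (in the real node, the item comes from $input.first()), and the candidate field names are just examples:

```javascript
// Sketch of the Transform JSON Script logic, simulated outside n8n.
// In the real Code node, `item` comes from $input.first().
function transformTaskOutput(item) {
  const raw = item.json.output && item.json.output.string;
  if (!raw) {
    throw new Error("Missing output.string; check the BrowserAct task result structure");
  }
  // Parse the JSON string and emit one n8n item per candidate profile.
  return JSON.parse(raw).map((candidate) => ({ json: candidate }));
}

// Mock BrowserAct payload with one candidate:
const mock = { json: { output: { string: JSON.stringify([{ Name: "octocat", Followers: 120 }]) } } };
console.log(transformTaskOutput(mock).length); // 1
```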

Step 4: Configure Output Updates and Notifications

Scored candidates are written to Google Sheets and a Slack notification is sent after updates.

  1. Open Update Spreadsheet Row and set Operation to appendOrUpdate.
  2. Set Document ID to [YOUR_ID] and Sheet Name to [YOUR_ID].
  3. Map columns to expressions: URL = {{ $json.output.URL }}, Name = {{ $json.output.Name }}, Score = {{ $json.output.Score }}, Folowers = {{ $json.output.Folowers }}, Location = {{ $json.output.Location }}, TotalRepo = {{ $json.output.TotalRepo }}.
  4. Set Matching Columns to Name so updates match existing rows.
  5. Credential Required: Connect your googleSheetsOAuth2Api credentials in Update Spreadsheet Row.
  6. Open Notify Slack Channel and set Text to GitHub Contributors Updated, then choose your Slack channel.
  7. Credential Required: Connect your slackOAuth2Api credentials in Notify Slack Channel.

⚠️ Common Pitfall: If the sheet column names do not exactly match Name, Location, TotalRepo, Folowers, URL, and Score, the update can fail or write empty values.
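Conceptually, appendOrUpdate with Matching Columns set to Name behaves like the sketch below. It is a simplified model, not the node's real implementation:

```javascript
// Simplified model of Google Sheets appendOrUpdate matching on the Name column.
function appendOrUpdate(rows, candidate) {
  const index = rows.findIndex((row) => row.Name === candidate.Name);
  if (index === -1) {
    rows.push(candidate); // new lead: append a fresh row
  } else {
    rows[index] = { ...rows[index], ...candidate }; // existing lead: update in place
  }
  return rows;
}

const sheet = [];
appendOrUpdate(sheet, { Name: "octocat", Score: 72 });
appendOrUpdate(sheet, { Name: "octocat", Score: 88 }); // re-scored, same Name
console.log(sheet.length, sheet[0].Score); // 1 88
```

This is why the Name column matters so much: two rows with slightly different names ("octocat" vs. "Octocat, Inc.") will not match, and you get a duplicate.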

Step 5: Add Error Handling Alerts

The workflow routes task retrieval errors to a Slack alert for visibility.

  1. Confirm Retrieve Task Details uses the error output to connect to Post Error Alert.
  2. Open Post Error Alert and set Text to BrowserAct Workflow Faces Problem.
  3. Select the appropriate Channel where error alerts should appear.
  4. Credential Required: Connect your slackOAuth2Api credentials in Post Error Alert.

Errors in Retrieve Task Details are routed without stopping the workflow thanks to its error output configuration.

Step 6: Test and Activate Your Workflow

Validate each part of the automation before enabling it on a live schedule.

  1. Click Execute Workflow to run Scheduled Automation Trigger manually.
  2. Confirm Launch Workflow Task and Retrieve Task Details return a task with an output.string payload.
  3. Verify Transform JSON Script outputs multiple items and Candidate Scoring Agent returns structured results through Structured Result Parser.
  4. Check that Update Spreadsheet Row writes or updates rows and Notify Slack Channel posts GitHub Contributors Updated.
  5. If needed, force an error in Retrieve Task Details to validate Post Error Alert notifications.
  6. Once verified, toggle the workflow to Active to run on schedule.

Common Gotchas

  • BrowserAct credentials can expire or require the right workspace permissions. If things break, check your BrowserAct API key and the BrowserAct task history first.
  • If you’re using Wait-style polling (like “get details” after launching a scrape), processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
  • Default prompts in AI nodes are generic. Add your hiring rubric and “what good looks like” early or you’ll be editing outputs forever.

Frequently Asked Questions

How long does it take to set up this GitHub hiring automation?

About an hour if you already have BrowserAct, Sheets, and Slack ready.

Do I need coding skills to automate GitHub hiring?

No. You’ll mostly connect accounts and adjust the search criteria and sheet columns. A small amount of comfort reviewing a “transform JSON” step helps, but you’re not writing an app.

Is n8n free to use for this GitHub hiring automation workflow?

Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in Gemini usage, which is usually a few cents per batch depending on how many profiles you score.

Where can I host n8n to run this automation?

Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.

Can I customize this GitHub hiring automation workflow for scoring different roles (like frontend vs. data)?

Yes, and it’s the part you should customize first, honestly. You can adjust the AI Agent’s scoring rubric to weight different signals (for example, emphasize TypeScript repos for frontend, or data tooling and notebooks for data roles). If you change what fields you want to store, update the Structured Result Parser to match. Then map the new fields into the Google Sheets “Update Spreadsheet Row” node so your sheet stays sortable.
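If it helps to picture the rubric, a weighted score combines normalized signals like this. The weights and signal names below are purely illustrative, not the template's defaults; in this workflow the actual weighting lives in the agent's prompt:

```javascript
// Illustrative weighted scoring with made-up weights that sum to 1.
// Swap the weights per role, e.g. heavier repoSignal for frontend hires.
function finalScore({ summaryQuality, repoSignal, followerSignal }) {
  // Each signal is assumed to be normalized to a 0-100 scale.
  return 0.4 * summaryQuality + 0.4 * repoSignal + 0.2 * followerSignal;
}

console.log(finalScore({ summaryQuality: 100, repoSignal: 50, followerSignal: 0 })); // 60
```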

Why is my Google Sheets connection failing in this workflow?

Usually it’s expired Google credentials or the wrong spreadsheet permissions. Reconnect Google Sheets in n8n, confirm the account can edit the target file, and double-check the sheet/tab name matches what the node expects. If it fails only sometimes, you may also be hitting rate limits when many candidates are processed at once.

How many candidates can this GitHub hiring automation handle?

If you self-host, it mostly depends on your server size and how fast BrowserAct and the AI model return results.

Is this GitHub hiring automation better than using Zapier or Make?

For this use case, usually yes, because you’re doing more than moving data between two apps. You need scheduled runs, task polling (“launch scrape” then “get details”), transformation logic, and structured AI output that stays consistent as you scale. n8n handles branching and parsing without turning every extra step into a pricing surprise, and self-hosting keeps execution volume flexible. Zapier or Make can still work if you simplify the workflow and accept less control over scoring. If you want help deciding, Talk to an automation expert.

Once this is running, your “sourcing” becomes review and action, not tab-hoarding and copy-paste. The workflow handles the repetitive stuff. You handle the hiring calls.

Need Help Setting This Up?

Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.

Lisa Granqvist

Workflow Automation Expert

Expert in workflow automation and no-code tools.
