January 22, 2026

Google Sheets + ScrapingBee: enriched leads, ready

Lisa Granqvist, Partner, Workflow Automation Expert

Your lead sheet looks “full,” but half the rows are dead ends. Wrong websites, missing contact pages, generic directories, and emails that bounce the moment you hit send.

This is the kind of mess that slows down marketing ops first. But agency owners building prospect lists and sales teams doing weekly outreach feel it too. With Sheets lead enrichment automation, you turn a basic “business type + city + state” row into a usable company site and real emails, without spending your afternoon in Google.

This workflow pulls leads from Google Sheets, searches with Serper.dev, scrapes likely pages via ScrapingBee, extracts email addresses, and writes everything back to your sheet with clear status updates.

How This Automation Works

See how this solves the problem:

n8n Workflow Template: Google Sheets + ScrapingBee: enriched leads, ready

The Challenge: Turning “leads” into contacts you can actually email

Building a lead list is easy. Building one that’s outreach-ready is where the time disappears. You start with a few columns in Google Sheets, then you open a new tab for every row: search the company, guess which site is real, click around for a contact page, and copy-paste anything that looks like an email. Now multiply that by 200 rows. Mistakes creep in fast, and frankly it’s mentally exhausting because every business formats their site differently and directories keep showing up in search results.

It adds up fast. Here’s where it breaks down in real life.

  • You waste about 5–10 minutes per lead just figuring out the “real” website versus listings and aggregator pages.
  • People copy the wrong URL into the sheet, and that one bad field poisons your entire outreach sequence.
  • Email hunting becomes inconsistent, so one person finds great contacts while another finds nothing and nobody knows why.
  • Status tracking is usually manual, which means duplicates, skipped rows, and “Did we already do this?” meetings.

The Fix: Google Sheets lead enrichment with Serper.dev + ScrapingBee

This workflow starts inside your Google Sheet. When you “activate” a row, it first checks that the basics are present (business type, city, state). If something is missing, it flags the row so you don’t waste cycles on junk inputs. If the row looks good, it marks the status as Running, prepares search inputs like country and language, then queries Serper.dev to find likely company websites. Next, it generates a set of “site variants” and candidate contact pages, validates those URLs, and sends the best options to ScrapingBee for scraping. Emails are extracted from the scraped pages, checked against what you already have in the sheet, and then written back in a clean comma-separated format. Finally, the row gets marked Finished so your list stays organized.

The workflow kicks off from a Sheets Row Trigger. From there, Serper.dev is used to locate the best company pages, and ScrapingBee handles the messy part of pulling content reliably. The output is simple: updated columns in Google Sheets (company name, URL, emails, and status) so your outreach list stays ready to use.


Real-World Impact

Say you enrich 100 leads every week. Manually, you might spend about 8 minutes per lead between searching, clicking, and hunting for a usable email, which is roughly 13 hours of busywork. With this workflow, you activate rows in Google Sheets and let it run: a minute to set up the queue, then the automation searches, validates, scrapes, and updates the sheet while you do other work. Even if you still review the results for a few minutes at the end, you’re usually saving most of that day.

Requirements

  • n8n instance (try n8n Cloud free)
  • Self-hosting option if you prefer (Hostinger works well)
  • Google Sheets to store leads and results
  • Serper.dev to search for real company websites
  • ScrapingBee to scrape pages and extract emails
  • Google Sheets API credentials (get them from Google Cloud Console)
  • Serper.dev API key (get it from your Serper.dev dashboard)
  • ScrapingBee API key (get it from your ScrapingBee dashboard)

Skill level: Beginner. You’ll connect accounts, paste API keys, and match a few Google Sheets columns.

Need help implementing this? Talk to an automation expert (free 15-minute consultation).

The Workflow Flow

A Google Sheets row gets “activated.” The trigger watches your sheet for rows you want processed, so you control what runs and when (handy when you’re cleaning up inputs first).

Basic input validation happens immediately. If business type, city, or state is missing, the workflow writes a “Missing information” status and moves on. No wasted API calls.
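If you want to adapt the validation step, the check is simple enough to sketch. This is a hypothetical version of the logic, assuming your input sheet uses the columns "Business Type", "City", and "State" (an n8n Code node would return items rather than call `console.log`):

```javascript
// Sketch of the row validation: require the three fields before any
// API call is made. Empty or whitespace-only values count as missing.
function validateRow(row) {
  const required = ["Business Type", "City", "State"];
  const missing = required.filter(
    (field) => !row[field] || String(row[field]).trim() === ""
  );
  // Valid rows continue to "Mark Status Running"; invalid rows are
  // flagged with a "Missing data" status instead.
  return missing.length === 0
    ? { valid: true }
    : { valid: false, status: "Missing data", missing };
}

console.log(validateRow({ "Business Type": "dentist", City: "Rosario", State: "Santa Fe" }).valid); // true
console.log(validateRow({ City: "Rosario" }).status); // "Missing data"
```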

Serper.dev finds likely company pages. n8n sends a search request, parses the results, and appends research rows so the workflow can test multiple candidates instead of trusting the first link it sees.

URLs are validated, then ScrapingBee scrapes the best options. The workflow checks that pages respond properly, scrapes content, and extracts email addresses. If emails exist, it looks up what you already have and updates the record.

Google Sheets is updated and the row is finalized. You get company name, URL, comma-separated emails, and a Finished status so your sheet stays clean.

You can easily modify country, language, or result count to fit different regions and niches. See the full implementation guide below for customization options.

Step-by-Step Implementation Guide

Step 1: Configure the Sheets Row Trigger

Set up the trigger to watch for row updates in your input sheet.

  1. Add the Sheets Row Trigger node and set Event to rowUpdate.
  2. Set Columns To Watch to Activate.
  3. Set Poll Times to everyMinute.
  4. Select the target Document and Sheet Name for your input sheet.
  5. Credential Required: Connect your googleSheetsTriggerOAuth2Api credentials.

Step 2: Connect Google Sheets for Status and Data Operations

Configure the Google Sheets nodes that update status and write research data.

  1. In Mark Status Running, set Operation to update, map Client to {{ $json.Client }}, and set Status to Running.
  2. In Flag Missing Data, set Operation to update, map Client to {{ $json.Client }}, and set Status to Missing data.
  3. In Append Research Rows, set Operation to append and map: City to {{ $('Assign Search Inputs').item.json.city }}, State to {{ $('Assign Search Inputs').item.json.state }}, Client to {{ $json.client }}, Company to {{ $json.company }}, and website to {{ $json.Website }}.
  4. In Lookup Existing Emails, set the filter to lookup Company with {{ $json.company }}.
  5. In Update Email Records, set Operation to update and map emails to {{ $json.emails ? $json.emails + ", " + $('Extract Email Addresses').item.json.email : $('Extract Email Addresses').item.json.email }}, and Company to {{ $('Extract Email Addresses').item.json.company }}.
  6. In Mark Status Finished, set Operation to update, map Client to {{ $('Generate Site Variants').item.json.client }}, and set Status to Finished.

Credential Required: Connect your googleSheetsOAuth2Api credentials to Mark Status Running and Flag Missing Data.

⚠️ Common Pitfall: The other Google Sheets nodes (Append Research Rows, Lookup Existing Emails, Update Email Records, Mark Status Finished) also require Google Sheets credentials, but none are configured. Add the same googleSheetsOAuth2Api credentials to each of them.
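The expression in Update Email Records simply appends new emails to whatever the sheet already holds. As a plain-JavaScript sketch of that merge (the deduplication here is an added nicety, not part of the original expression):

```javascript
// Merge newly found emails into an existing comma-separated cell value,
// skipping any address that is already present.
function mergeEmails(existing, found) {
  const current = existing ? existing.split(",").map((e) => e.trim()) : [];
  for (const email of found) {
    if (!current.includes(email)) current.push(email);
  }
  return current.join(", ");
}

console.log(mergeEmails("info@example.com", ["sales@example.com"]));
// "info@example.com, sales@example.com"
```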

Step 3: Set Up Validation and Search Input Preparation

Validate incoming rows and build search inputs for the Serper query.

  1. In Input Validation, ensure the three conditions check for non-empty values: {{ $json.Client }}, {{ $json.City }}, and {{ $json.State }}.
  2. Confirm Input Validation routes valid rows to Mark Status Running and invalid rows to Flag Missing Data.
  3. In Assign Search Inputs, enable Keep Only Set and set fields: state to {{ $('Sheets Row Trigger').item.json.State }}, city to {{ $('Sheets Row Trigger').item.json.City }}, client to {{ $('Sheets Row Trigger').item.json.Client }}, business_type to {{ $node["Sheets Row Trigger"].json["Business Type"] }}, country to Argentina, country_code to AR, language to es-419, and result_count to 10.

Step 4: Configure Search and Link Parsing

Query Serper and filter the organic results into company candidates.

  1. In Serper Search Request, set URL to https://google.serper.dev/search and Request Method to POST.
  2. Enable JSON Parameters and set Body Parameters JSON to { "q": "{{ $json.business_type }} in {{ $json.city }}, {{ $json.state }}, {{ $json.country }}", "num": {{ $json.result_count }}, "gl": "{{ $json.country_code }}", "hl": "{{ $json.language }}" }.
  3. Credential Required: Connect your httpHeaderAuth credentials for the Serper API.
  4. Keep Parse Company Links as-is to filter out blacklisted results and map company, Website, client, state, and city values.
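To make the body parameters concrete, here is the same request body built in plain JavaScript from the fields set in Assign Search Inputs (the function name is illustrative; the field names match step 3):

```javascript
// Build the Serper.dev search body: q is the search query, num the
// result count, gl the country code, hl the interface language.
function buildSerperBody(inputs) {
  return {
    q: `${inputs.business_type} in ${inputs.city}, ${inputs.state}, ${inputs.country}`,
    num: inputs.result_count,
    gl: inputs.country_code, // e.g. "AR"
    hl: inputs.language,     // e.g. "es-419"
  };
}

const body = buildSerperBody({
  business_type: "dentist",
  city: "Rosario",
  state: "Santa Fe",
  country: "Argentina",
  country_code: "AR",
  language: "es-419",
  result_count: 10,
});
console.log(body.q); // "dentist in Rosario, Santa Fe, Argentina"
```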

Step 5: Generate URL Variants and Batch Processing

Create multiple contact/support URL variants and iterate through them for scraping.

  1. Keep Append Research Rows connected after Parse Company Links to log candidate companies before scraping.
  2. In Generate Site Variants, keep the JavaScript that builds multiple URL paths for each website.
  3. In Batch Iterator, leave the default batching options unless you need to control throughput.
  4. Confirm the flow: Append Research Rows → Generate Site Variants → Batch Iterator → Validate Page URLs.

⚠️ Common Pitfall: If your input sheet uses different column names (e.g., “website” vs “Website”), adjust the mapping in Append Research Rows and Generate Site Variants accordingly.
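For orientation, this is a hypothetical sketch of what the Generate Site Variants code does: take each candidate website and build a few likely contact-page URLs to try. The exact path list in the template may differ:

```javascript
// For one website, produce the base URL plus common contact/about
// paths so the scraper can test several pages per company.
function generateSiteVariants(website) {
  const base = website.replace(/\/+$/, ""); // strip trailing slashes
  const paths = ["", "/contact", "/contacto", "/about", "/support"];
  return paths.map((p) => base + p);
}

console.log(generateSiteVariants("https://example.com/"));
// ["https://example.com", "https://example.com/contact", ...]
```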

Step 6: Configure Scraping, Parsing, and Email Updates

Validate pages, scrape HTML, extract emails, and update existing records.

  1. In Validate Page URLs, set URL to {{ $('Generate Site Variants').item.json.Website }}.
  2. In Scrape Result Check, keep the condition that checks {{ $json.error.message }} is empty before continuing to scrape.
  3. In ScrapingBee Request, set URL to https://app.scrapingbee.com/api/v1/?api_key=[CONFIGURE_YOUR_API_KEY]&url={{ $('Generate Site Variants').item.json.Website }}&render_js=true and replace [CONFIGURE_YOUR_API_KEY] with your ScrapingBee key.
  4. In Scrape Success Check, keep the condition that checks {{ $json.error.message }} is empty before extracting emails.
  5. In Extract Email Addresses, keep the JavaScript that extracts and deduplicates emails from the HTML data field.
  6. In Email Presence Check, use the not-empty condition on {{ $('Extract Email Addresses').item.json.email }} to decide whether to update.
  7. Confirm the update flow: Email Presence Check → Lookup Existing Emails → Update Email Records → Delay Pause → Batch Iterator.

⚠️ Common Pitfall: The ScrapingBee URL includes a placeholder API key. If you leave [CONFIGURE_YOUR_API_KEY] unchanged, scraping will fail and the workflow will loop back via Batch Iterator.
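The extraction step boils down to a regex over the scraped HTML plus deduplication. A minimal sketch, assuming the scraped content arrives as a string (the template's actual regex may be stricter):

```javascript
// Pull email addresses out of raw HTML, lowercase them, and remove
// duplicates so the same address is not written to the sheet twice.
function extractEmails(html) {
  const pattern = /[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/g;
  const matches = html.match(pattern) || [];
  return [...new Set(matches.map((e) => e.toLowerCase()))];
}

console.log(extractEmails('<a href="mailto:Info@Example.com">Info@Example.com</a>'));
// ["info@example.com"]
```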

Step 7: Test and Activate Your Workflow

Verify the full enrichment pipeline and then turn it on for production updates.

  1. Manually run the workflow with a test row update in the input sheet and ensure Sheets Row Trigger fires.
  2. Check that Mark Status Running updates the input sheet status to Running or Flag Missing Data updates it to Missing data when fields are empty.
  3. Verify Append Research Rows appends results to the data sheet and Update Email Records writes email values.
  4. Confirm Mark Status Finished sets the final status to Finished after batch processing.
  5. Activate the workflow by toggling the Active switch in n8n.

Watch Out For

  • Google Sheets credentials can expire or need specific permissions. If things break, check the n8n credential connection and your Google Cloud OAuth consent/settings first.
  • If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.

Common Questions

How quickly can I implement this Sheets lead enrichment automation?

About 30 minutes if your API keys and Google Sheets access are ready.

Can non-technical teams implement this lead enrichment?

Yes. You won’t write code, but you will need to map your sheet columns and paste a couple of API keys into n8n.

Is n8n free to use for this Sheets lead enrichment workflow?

Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in Serper.dev and ScrapingBee usage (both have free tiers, then usage-based pricing).

Where can I host n8n to run this automation?

Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.

How do I adapt this Sheets lead enrichment solution to my specific challenges?

Start by making country, country_code, language, and result_count come from columns in your sheet, so each row can control how it searches. You can also expand the blacklist logic in the “Generate Site Variants” / filtering code to avoid directories you hate. If you want more than emails, extend the “Extract Email Addresses” code to capture phones or social links and write them back as new columns.
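As a starting point for that extension, here is a hedged sketch of capturing phone numbers alongside emails. The phone pattern is illustrative only and would need tuning for your target regions:

```javascript
// Extract both emails and phone-like strings from scraped HTML.
// The phone regex is a loose heuristic: optional +, then 9+ digits
// allowing spaces, dots, dashes, and parentheses in between.
function extractContacts(html) {
  const emails = [...new Set(html.match(/[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/g) || [])];
  const phones = [...new Set(html.match(/\+?\d[\d\s().-]{7,}\d/g) || [])];
  return { emails, phones };
}

const result = extractContacts("Call +54 11 4321-5678 or write ventas@example.com");
console.log(result.phones); // ["+54 11 4321-5678"]
```

The extra fields can then be written back as new columns in the Update Email Records step.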

Why is my Google Sheets connection failing in this Sheets lead enrichment workflow?

Usually it’s expired Google OAuth credentials or the wrong Google account connected to the n8n credential. Reconnect Google Sheets in n8n, then confirm the sheet is shared with that account and the correct spreadsheet is selected. If only updates fail, check that your column names match what the workflow expects (including the activate field).

What’s the capacity of this Sheets lead enrichment solution?

If you self-host, there’s no execution cap (it’s mainly your server and API limits). On n8n Cloud, capacity depends on your plan’s monthly executions. Practically, this workflow is gated by Serper.dev and ScrapingBee rate limits, so most teams run it in batches of a few dozen to a few hundred leads at a time.

Is this Sheets lead enrichment automation better than using Zapier or Make?

Often, yes, because this flow relies on branching logic (multiple checks), looping through candidate URLs, and code-based parsing, which gets awkward and pricey in many no-code tools. n8n handles split-in-batches loops cleanly, and you can self-host for unlimited executions. Zapier or Make can still be fine if your process is “search once, store one result,” but this workflow is built for real-world messiness: duplicates, bad URLs, and multiple pages per company. One more thing: keeping status fields like Running and Finished inside Google Sheets makes ops handoffs easier, and n8n fits that pattern well. If you’re on the fence, Talk to an automation expert and we’ll sanity-check your setup.

Once this is running, your sheet stops being a wish list and starts being an outreach queue. The workflow handles the repetitive digging so you can focus on the message and the offer.

Need Help Setting This Up?

Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.

Lisa Granqvist

Workflow Automation Expert

Expert in workflow automation and no-code tools.
