January 22, 2026

BrowserAct to Google Sheets: clean leads, no duplicates

Lisa Granqvist, Workflow Automation Expert

Copying business listings out of directories is the kind of work that looks “quick” until you’ve done it 40 times. Tabs everywhere. Same company twice. A spreadsheet that’s already messy before you even start outreach.

Marketing managers trying to build fresh prospect lists feel it fast. A sales rep doing local outreach gets stuck in the same loop. And if you run an agency, you know how a BrowserAct-to-Sheets lead automation changes the pace when clients want “more leads by Friday.”

This guide shows how this n8n workflow scrapes directory results with BrowserAct, cleans the data, and writes deduped rows into Google Sheets so your lead list is ready to use.

How This Automation Works

Here’s the complete workflow you’ll be setting up:

n8n Workflow Template: BrowserAct to Google Sheets, clean leads, no duplicates

Why This Matters: Lead Lists Get Messy Before You Even Start

Directory scraping sounds straightforward until you’re actually doing it. You search a category, open 10 listings, copy names, sites, phone numbers, then realize half the “unique” results are the same business with different formatting. After that, you still have to normalize columns, remove blank fields, and guess which row is the most accurate. The worst part is the mental load. You’re trying to do judgment-heavy work (who’s a fit?) while also doing clerical work (copy-paste, dedupe, cleanup), and those two tasks don’t mix well.

It adds up fast. Here’s where it breaks down in real life.

  • Every manual export turns into a cleanup session that can easily burn an hour per list.
  • Duplicates sneak in because “ACME Plumbing” and “Acme Plumbing LLC” look different to a human skimming quickly.
  • Directory pages change, so your “process” becomes a fragile set of tabs and habits.
  • Outreach slows down because the list isn’t trustworthy, which means you double-check rows instead of emailing.
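The “ACME Plumbing” vs. “Acme Plumbing LLC” problem above is exactly what the dedupe step has to handle. The workflow itself matches on the raw “Company Name” column in Google Sheets, but as a sketch, a small normalizer (a hypothetical helper, not part of the template) shows how those two strings can be collapsed to the same key if you want stricter matching:

```javascript
// Hypothetical helper: collapse common formatting differences so
// "ACME Plumbing" and "Acme Plumbing, LLC" dedupe to the same key.
function normalizeCompanyName(name) {
  return name
    .toLowerCase()
    .replace(/\b(llc|inc|ltd|co|corp)\.?\b/g, "") // strip legal suffixes
    .replace(/[^a-z0-9 ]/g, "")                   // drop punctuation
    .replace(/\s+/g, " ")                         // collapse whitespace
    .trim();
}
```

Exact-match dedupe on the raw name is usually good enough for a single directory; a normalizer like this only matters when the same business shows up with inconsistent formatting.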

What You’ll Build: Directory Scraping That Lands Cleanly in Sheets

This workflow gives you a repeatable way to generate local leads without babysitting a spreadsheet. You start it manually in n8n, set the inputs (the business category and city), and n8n hands the scraping job to BrowserAct using a purpose-built directory template (like YP.com). BrowserAct runs the scrape in the background, then n8n checks back to see when it’s finished. Once the results are ready, a small code step parses the raw JSON output into clean, individual business records. Finally, Google Sheets appends or updates rows and matches on “Company Name” so you don’t keep re-adding the same lead.

The workflow begins with a manual launch, then runs a “start scrape” request in BrowserAct and waits for completion. Next, it transforms the raw scraper output into usable rows. Google Sheets becomes the final destination, with dedupe logic built into the way rows are written.


Expected Results

Say you build one list per day for local outreach: 50 businesses from a directory. Manually, grabbing basics (name, phone, site) and cleaning duplicates is often about 2 minutes per business, plus about 30 minutes fixing the sheet, so roughly 2 hours. With this workflow, you spend about 2 minutes entering the category and city, then wait for BrowserAct to finish, and the rows land in Google Sheets already structured and deduped. The “hands-on” part drops to minutes, not hours.

Before You Start

  • n8n instance (try n8n Cloud free)
  • Self-hosting option if you prefer (Hostinger works well)
  • BrowserAct for running the directory scraping task.
  • Google Sheets to store and dedupe the lead list.
  • BrowserAct API key (get it from your BrowserAct account settings).

Skill level: Intermediate. You’ll connect credentials, paste a workflow, and tweak a couple of inputs, but you won’t be writing an app.

Want someone to build this for you? Talk to an automation expert (free 15-minute consultation).

Step by Step

Manual launch with your targeting inputs. You run the workflow and set the business category and city location so the scrape matches the market you actually want.

BrowserAct runs the directory scrape. n8n sends a job to BrowserAct using the “Online Directory Lead Scraper (YP.com)” template, then checks back with a second BrowserAct step to confirm the job is done.

Raw results get turned into clean items. BrowserAct returns a single JSON string, so the Code step parses it and splits it into one business per item. This is what makes the spreadsheet write reliable.

Google Sheets becomes the source of truth. The final step updates or appends rows and matches on “Company Name,” which keeps your list clean when you run the same search again next week.

You can easily modify the category and location inputs to target a new niche or expand into nearby cities based on your needs. See the full implementation guide below for customization options.

Step-by-Step Implementation Guide

Step 1: Configure the Manual Trigger

Set up the manual entry point so you can run the workflow on demand while testing the scrape and sheet updates.

  1. Add Manual Launch Trigger as the first node in the workflow.
  2. Connect Manual Launch Trigger to Initiate Scrape Task to start the scrape flow.

Step 2: Connect BrowserAct

These nodes kick off the scraping workflow and wait for completion before parsing results.

  1. Open Initiate Scrape Task and set workflowId to [YOUR_ID].
  2. Under inputParameters, confirm the values: business_category = dentists and city_location = Brooklyn.
  3. Credential Required: Connect your browserActApi credentials in Initiate Scrape Task.
  4. Open Await Task Completion and set taskId to {{ $json.id }}.
  5. Set operation to getTask, waitForFinish to true, maxWaitTime to 600, and pollingInterval to 20.
  6. Credential Required: Connect your browserActApi credentials in Await Task Completion.
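To make the waitForFinish, maxWaitTime, and pollingInterval settings concrete, here is an illustrative sketch of the polling pattern the node implements (not the node’s actual source; checkStatus stands in for the BrowserAct “get task” API call):

```javascript
// Re-check the task every pollingInterval seconds until it reports
// "finished" or maxWaitTime seconds have elapsed.
async function awaitTaskCompletion(checkStatus, maxWaitTime, pollingInterval) {
  const deadline = Date.now() + maxWaitTime * 1000;
  while (Date.now() < deadline) {
    const task = await checkStatus();              // placeholder for GET task status
    if (task.status === "finished") return task;
    await new Promise((r) => setTimeout(r, pollingInterval * 1000));
  }
  throw new Error(`Task did not finish within ${maxWaitTime}s`);
}
```

With the values from this step (maxWaitTime 600, pollingInterval 20), the node checks the task roughly every 20 seconds and gives up after 10 minutes, which is why long scrapes need a larger maxWaitTime.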

Step 3: Set Up Transform Parsed Items

This node converts the scraper output string into individual items so each business can be mapped to a spreadsheet row.

  1. Add Transform Parsed Items and paste the provided JavaScript into jsCode.
  2. Ensure the code reads the input from $input.first().json.output.string as shown in the script.
  3. Connect Await Task Completion to Transform Parsed Items.

⚠️ Common Pitfall: If $input.first().json.output.string is missing or not valid JSON, the code throws an error and the workflow will stop.
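For reference, a minimal version of the logic this Code step performs looks like the sketch below. It assumes the scraper returns a JSON array string (per the output.string path above) and that the field names match the Sheets mapping used in the next step; the actual provided script may differ in details.

```javascript
// Parse the single JSON string BrowserAct returns into one
// n8n item per business ({ json: {...} } is the n8n item shape).
function transformParsedItems(rawString) {
  let businesses;
  try {
    businesses = JSON.parse(rawString);
  } catch (e) {
    // Invalid JSON stops the workflow here, by design.
    throw new Error("output.string is not valid JSON: " + e.message);
  }
  if (!Array.isArray(businesses)) {
    throw new Error("Expected a JSON array of businesses");
  }
  return businesses.map((b) => ({ json: b }));
}

// Inside the Code node you would call it roughly like:
// return transformParsedItems($input.first().json.output.string);
```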

Step 4: Configure Update Spreadsheet Rows

Append or update the parsed businesses in your Google Sheet using column mappings.

  1. Add Update Spreadsheet Rows and set operation to appendOrUpdate.
  2. Set documentId to [YOUR_ID] and sheetName to [YOUR_ID].
  3. Map columns using the defined schema: Company Name → {{ $json.Name }}, Category → {{ $json.Business }}, Phone Number → {{ $json.Phone }}, Address → {{ $json.Location }}.
  4. Set matchingColumns to Company Name so updates match existing rows.
  5. Credential Required: Connect your googleSheetsOAuth2Api credentials in Update Spreadsheet Rows.
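If the appendOrUpdate behavior is unfamiliar, this sketch models what the node does with matchingColumns set to Company Name (a conceptual illustration with rows as plain objects, not the node’s implementation):

```javascript
// Update a row when the match column's value already exists in the
// sheet; otherwise append a new row. Mutates and returns `rows`.
function appendOrUpdate(rows, incoming, matchColumn) {
  const index = new Map(rows.map((r, i) => [r[matchColumn], i]));
  for (const row of incoming) {
    const i = index.get(row[matchColumn]);
    if (i === undefined) {
      index.set(row[matchColumn], rows.push(row) - 1); // append
    } else {
      rows[i] = { ...rows[i], ...row };                // update in place
    }
  }
  return rows;
}
```

This is why re-running the same search next week refreshes existing leads instead of duplicating them.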

Step 5: Test and Activate Your Workflow

Run a full test to confirm the scrape completes, data parses, and rows are written to your Google Sheet.

  1. Click Execute Workflow on Manual Launch Trigger to start a test run.
  2. Verify Await Task Completion returns task output and Transform Parsed Items outputs multiple items.
  3. Check the spreadsheet for newly appended or updated rows in the target sheet.
  4. When results look correct, toggle the workflow to Active for production use.

Troubleshooting Tips

  • BrowserAct credentials can expire or the API token may be revoked. If things break, check your BrowserAct API key in your BrowserAct account settings first, then re-save the credential in n8n.
  • Long scrape jobs can outlast the default wait. If Await Task Completion times out or downstream nodes fail on empty responses, increase maxWaitTime (and consider a longer pollingInterval) before retrying.

Quick Answers

What’s the setup time for this BrowserAct Sheets leads automation?

About 30 minutes if your BrowserAct and Google credentials are ready.

Is coding required for this lead dedupe outcome?

No. You’ll use the included Code step as-is and only adjust inputs like category and location.

Is n8n free to use for this BrowserAct Sheets leads workflow?

Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in BrowserAct usage costs based on how many scraping tasks you run.

Where can I host n8n to run this automation?

Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.

Can I modify this BrowserAct Sheets leads workflow for different use cases?

Yes, and you probably should. Most people start by changing the BrowserAct template (directory source) and the input fields (category and city) in the “Run a workflow task” setup. You can also adjust the Google Sheets mapping if you want extra columns like email, rating, or address formatting. If your directory returns “Company Name” inconsistently, swap the dedupe field to a website domain or phone number to make matches more reliable.
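Swapping the dedupe field, as suggested above, means deriving a stable key from a website or phone number. A hypothetical helper (field names Website, Phone, and Name are assumptions; adjust to whatever your directory template actually returns):

```javascript
// Build a dedupe key from the most stable identifier available:
// website domain first, then digits-only phone, then lowercased name.
function dedupeKey(lead) {
  if (lead.Website) {
    // "https://www.acme.com/contact" -> "acme.com"
    return new URL(lead.Website).hostname.replace(/^www\./, "");
  }
  if (lead.Phone) {
    return lead.Phone.replace(/\D/g, ""); // "(718) 555-0101" -> "7185550101"
  }
  return (lead.Name || "").toLowerCase().trim();
}
```

You could compute a key like this in the Code step, write it to an extra column, and point matchingColumns at that column instead of Company Name.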

Why is my BrowserAct connection failing in this workflow?

Usually it’s an expired or incorrect API key in your BrowserAct credential in n8n. Fix that first, then confirm the BrowserAct workflow template ID you’re calling still exists in your account. If the scrape starts but never completes, it can also be a long-running job and your “await completion” step needs a longer wait before checking status again.

What volume can this BrowserAct Sheets leads workflow process?

If you self-host n8n, there’s no execution limit (it mostly depends on your server and BrowserAct throughput), and most teams run a few scrapes per day without issue.

Is this BrowserAct Sheets leads automation better than using Zapier or Make?

Often, yes, because the BrowserAct community node and the “wait for task completion” pattern are simpler to control in n8n. You also get more flexible data shaping for the JSON parsing and row mapping, which matters when directory outputs vary. Another practical point: this workflow is designed for self-hosted n8n, so you can run a lot of lead pulls without counting every step as a paid task. Zapier or Make can still be fine for lightweight two-step workflows, but scraping plus parsing tends to get awkward there. Talk to an automation expert if you want a quick recommendation based on your lead volume and tools.

Once this is running, lead generation stops being a spreadsheet chore and starts looking like a repeatable system. You set the targeting, the workflow does the grunt work, and your list stays clean for the next campaign.

Need Help Setting This Up?

Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.

