BrowserAct + Google Sheets: funded leads, ready to call
Tracking funding announcements sounds simple until you’re juggling five tabs, a messy notes doc, and a spreadsheet that’s already full of duplicates. By the time you “finish” the list, the best leads have been contacted by someone else.
This pain hits sales teams and BD leads hardest, but founders doing scrappy outbound and analysts building market maps feel the same drag. The outcome of this funded-lead automation is straightforward: a clean, deduped Google Sheet of recently funded companies, ready for outreach.
This workflow uses BrowserAct to scrape articles, an AI Agent (Gemini) to extract the company details, and Google Sheets to store it all. You’ll see how it works, what you need, and where teams usually get stuck.
How This Automation Works
See how this solves the problem:
n8n Workflow Template: BrowserAct + Google Sheets: funded leads, ready to call
flowchart LR
subgraph sg0["When clicking ‘Execute workflow’ Flow"]
direction LR
n0@{ icon: "mdi:play-circle", form: "rounded", label: "When clicking ‘Execute workf..", pos: "b", h: 48 }
n1@{ icon: "mdi:robot", form: "rounded", label: "AI Agent", pos: "b", h: 48 }
n2["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Code in JavaScript"]
n3@{ icon: "mdi:database", form: "rounded", label: "Append or update row in sheet", pos: "b", h: 48 }
n4@{ icon: "mdi:robot", form: "rounded", label: "Structured Output", pos: "b", h: 48 }
n5@{ icon: "mdi:swap-horizontal", form: "rounded", label: "If", pos: "b", h: 48 }
n6@{ icon: "mdi:brain", form: "rounded", label: "Gemini Chat Model", pos: "b", h: 48 }
n7["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/slack.svg' width='40' height='40' /></div><br/>Send a message"]
n8["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/merge.svg' width='40' height='40' /></div><br/>Merge"]
n9@{ icon: "mdi:database", form: "rounded", label: "Get row(s) in sheet", pos: "b", h: 48 }
n10@{ icon: "mdi:swap-vertical", form: "rounded", label: "Loop Over Items", pos: "b", h: 48 }
n11@{ icon: "mdi:cog", form: "rounded", label: "Run a workflow Series 2", pos: "b", h: 48 }
n12@{ icon: "mdi:cog", form: "rounded", label: "Run a workflow Series 1", pos: "b", h: 48 }
n13@{ icon: "mdi:cog", form: "rounded", label: "Get workflow Series 2", pos: "b", h: 48 }
n14@{ icon: "mdi:cog", form: "rounded", label: "Get workflow Series1", pos: "b", h: 48 }
n5 --> n3
n8 --> n1
n1 --> n2
n6 -.-> n1
n7 --> n10
n10 --> n12
n10 --> n11
n4 -.-> n1
n2 --> n5
n9 --> n10
n14 --> n8
n13 --> n8
n12 --> n14
n11 --> n13
n3 --> n7
n0 --> n9
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n0 trigger
class n1,n4 ai
class n6 aiModel
class n5 decision
class n3,n9 database
class n2 code
classDef customIcon fill:none,stroke:none
class n2,n7,n8 customIcon
The Challenge: Funding news is scattered and hard to operationalize
Funding announcements don’t arrive in a neat, ready-to-call list. They show up across TechCrunch-style articles, press releases, local business journals, and roundups, and they’re written for humans, not CRMs. So you end up doing the same loop: search, skim, copy the company name, guess the industry, paste the URL, then try not to re-add a company you logged last week. The worst part is the mental load. You can’t tell if your “pipeline” is growing or just getting noisier.
It adds up fast. Here’s where it breaks down in real teams.
- Someone has to manually search for “Series A” and “Series B” articles over and over, and it turns into a recurring calendar chore.
- Copy-pasting into a sheet sounds harmless until the formatting drifts and you start losing URLs or mixing fields.
- Duplicates sneak in because people spell the same company name differently, which makes outreach tracking unreliable.
- Even when the lead is good, it often sits unnoticed because nobody gets notified when the list updates.
The Fix: Scrape funding announcements, extract leads, and upsert to Sheets
This workflow turns funding news into a working lead list. It starts by loading your search targets (keywords like “Series A” or “Series B”, plus locations) from Google Sheets, then loops through them to launch BrowserAct scraping jobs that pull relevant articles. Once BrowserAct finishes, everything gets merged into a single batch of content and handed to an AI Agent powered by Google Gemini. The AI reads the articles like a researcher would, then extracts structured fields such as company name, field of investment, and the source URL. Finally, the workflow formats the results, filters out empty hits, and upserts the remaining leads into Google Sheets by matching on “Company” so duplicates don’t pile up.
The workflow begins with a manual trigger (or a schedule if you swap in Cron). From there, BrowserAct does the collection work, Gemini handles the interpretation, and Google Sheets becomes your single, reliable list. A Slack message closes the loop so your team actually sees the update.
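For concreteness, here is the shape of one extracted lead. The field names match the parser schema used later in this guide (Company, InvestedOn, Url); the values are made up for illustration.

```javascript
// Illustrative shape of one extracted lead. Field names mirror the
// structured output schema; the values here are example data only.
const exampleLead = {
  Company: "Acme AI",                       // company name from the article
  InvestedOn: "Artificial Intelligence",    // field of investment
  Url: "https://example.com/acme-series-a", // source article URL
};

// The upsert step matches on Company, so that field drives dedupe.
console.log(Object.keys(exampleLead).join(", ")); // "Company, InvestedOn, Url"
```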
What Changes: Before vs. After
| What This Eliminates | Impact You’ll See |
|---|---|
| Manually re-running “Series A” and “Series B” searches | One click (or a schedule) covers every keyword/location combo |
| Copy-pasting article details into a spreadsheet | Structured fields (Company, InvestedOn, Url) written automatically |
| Duplicate entries from inconsistent company names | Upserts matched on “Company” keep the sheet deduped |
| Good leads sitting unnoticed | A Slack message tells the team the moment the list updates |
Real-World Impact
Say you track 10 keyword/location combos (for example: Series A + US, Series A + UK, Series B + US, and so on). Manually, it’s easy to spend about 10 minutes per combo searching, opening articles, and pasting notes, so that’s roughly 100 minutes per run, plus cleanup. With this workflow, you click once to run it, wait for BrowserAct to finish scraping (often 10–20 minutes), and the sheet is updated automatically. That’s about an hour back each run, and the data is actually usable.
Requirements
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- BrowserAct for scraping funding announcement pages.
- Google Sheets to store keywords and the lead list.
- Gemini account (get it from Google AI Studio) for the AI Agent analysis.
- Slack to notify your team when updates land.
Skill level: Intermediate. You’ll connect a few accounts, paste API credentials, and understand a basic sheet structure for inputs and outputs.
Need help implementing this? Talk to an automation expert (free 15-minute consultation).
The Workflow Flow
You launch it (or schedule it). The workflow is set up with a manual trigger right now, which is perfect for testing. If you want it to run daily or weekly, you can replace the trigger with a Cron schedule.
Google Sheets provides the search plan. A sheet row list supplies the keywords and locations you care about, then the workflow iterates through them in batches so you don’t overload scraping or AI calls.
BrowserAct collects the articles. Two scraping paths run for “Series A” and “Series B”, and separate “await” steps monitor the jobs so the workflow doesn’t move on before results are ready.
Gemini extracts structured lead data. The AI Agent reads the combined articles and returns clean fields (company, investment focus, URL). A formatting step converts that into individual items, then a filter removes blanks.
Sheets is updated and Slack is notified. Each lead is upserted by company name to prevent duplicates, then your Slack channel gets a message that the list has changed.
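The “await” steps follow a standard poll-until-done pattern. A minimal sketch in plain JavaScript is below; `getTaskStatus` is a hypothetical stand-in for the BrowserAct status call, and the 900-second max wait and 30-second interval mirror the node settings covered in the implementation guide.

```javascript
// Sketch of the "await scraping task" pattern: poll a task's status
// until it completes, fails, or the max wait time elapses.
async function awaitTask(taskId, getTaskStatus, { maxWaitSec = 900, intervalSec = 30 } = {}) {
  const deadline = Date.now() + maxWaitSec * 1000;
  while (Date.now() < deadline) {
    const task = await getTaskStatus(taskId);
    if (task.status === "completed") return task;
    if (task.status === "failed") throw new Error(`Task ${taskId} failed`);
    // Sleep before polling again so we don't hammer the API.
    await new Promise((resolve) => setTimeout(resolve, intervalSec * 1000));
  }
  throw new Error(`Task ${taskId} timed out after ${maxWaitSec}s`);
}
```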
You can easily modify the keyword list and the “match by” logic to fit your niche. See the full implementation guide below for customization options.
Step-by-Step Implementation Guide
Step 1: Configure the Manual Trigger
Start the workflow with a manual trigger so you can test and run it on demand while setting up your data sources and scrapers.
- Add the Manual Launch Trigger node as your trigger.
- Keep the default settings (no fields required) to allow manual executions during setup.
Step 2: Connect Google Sheets
Fetch keyword rows for Series A and Series B and prepare the destination sheet for upserts.
- Open Fetch Sheet Rows and select the target spreadsheet for keywords using Document value `[YOUR_ID]`.
- Set the Sheet to `[YOUR_ID]` (the sheet labeled “Keywords For Funding Announcement to Lead List (TechCrunch)”).
- Credential Required: Connect your `googleSheetsOAuth2Api` credentials in Fetch Sheet Rows.
- Open Upsert Sheet Row and select the target spreadsheet using Document value `[YOUR_ID]`.
- Set Operation to `appendOrUpdate` and map columns: Url to `{{ $json.text.Url }}`, Company to `{{ $json.text.Company }}`, and InvestedOn to `{{ $json.text.InvestedOn }}`.
- Credential Required: Connect your `googleSheetsOAuth2Api` credentials in Upsert Sheet Row.
Step 3: Configure Scraping and Batching
Split the keyword rows into batches and launch Series A and Series B scraping tasks for each record.
- In Batch Iterate Records, keep the default settings to iterate over each row from Fetch Sheet Rows.
- Configure Start Scrape Series A with Workflow ID set to `[YOUR_ID]` and input parameters: KeyWord to `{{ $json["keyword Series A"] }}` and Location to `{{ $json.Geo }}`.
- Credential Required: Connect your `browserActApi` credentials in Start Scrape Series A.
- Configure Start Scrape Series B with Workflow ID set to `[YOUR_ID]` and input parameters: KeyWord to `{{ $json["keyword Series B"] }}` and Location to `{{ $json.Geo }}`.
- Credential Required: Connect your `browserActApi` credentials in Start Scrape Series B.
- Batch Iterate Records outputs to both Start Scrape Series A and Start Scrape Series B in parallel, launching both scraping tasks for each record.
- In Await Series A Task and Await Series B Task, keep Operation as `getTask`, Max Wait Time as `900`, and Polling Interval as `30`, with Task ID set to `{{ $json.id }}`.
- Credential Required: Connect your `browserActApi` credentials in both Await Series A Task and Await Series B Task.
Note: Replace every `[YOUR_ID]` placeholder with valid BrowserAct workflow IDs, or the scraping tasks will not run.

Step 4: Set Up AI Analysis and Parsing
Use the AI agent to analyze scraped article text, then parse it into structured JSON output for downstream processing.
- Open Funding Analysis Agent and set Prompt to the provided multi-line instruction, keeping the expression `{{ $json.output.string }}` intact.
- Ensure Has Output Parser is enabled in Funding Analysis Agent.
- In Structured JSON Parser, set JSON Schema Example to `[{"Company":"<String>","InvestedOn":"<String>","Url":"<String>"}]`.
- Gemini Chat Model is connected as the language model for Funding Analysis Agent; ensure credentials are added to Gemini Chat Model.
- Credential Required: Connect your `googlePalmApi` credentials in Gemini Chat Model.
- Structured JSON Parser is an AI sub-node: add credentials to the parent node (Gemini Chat Model), not the parser.
Step 5: Format, Filter, and Notify
Format AI results into individual items, filter out non-matches, upsert the sheet, and post a Slack notification.
- In Format Result Items, keep the provided JavaScript to convert `$input.first().json.output` into individual items.
- Configure Company Presence Check with three OR conditions: Company, Url, and InvestedOn each equals `No Company`, using the expressions `{{ $json.text.Company }}`, `{{ $json.text.Url }}`, and `{{ $json.text.InvestedOn }}`.
- Ensure the true branch of Company Presence Check is empty and the false branch routes to Upsert Sheet Row.
- In Post Slack Update, set Text to `The data for the lead announcement has been updated` and select the target Channel value `[YOUR_ID]`.
- Credential Required: Connect your `slackOAuth2Api` credentials in Post Slack Update.
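The formatting and filtering logic in this step can be sketched as plain JavaScript. In n8n the first function lives in the Code node (reading `$input.first().json.output`) and the second mirrors the If node's OR conditions; both are shown standalone here so the transformation is easy to follow.

```javascript
// Sketch of "Format Result Items": wrap each extracted lead in the
// { json: { text: ... } } shape downstream nodes expect.
function formatResultItems(agentOutput) {
  return agentOutput.map((lead) => ({ json: { text: lead } }));
}

// Sketch of "Company Presence Check": discard an item if any field
// came back as the "No Company" placeholder (OR across conditions).
function isRealLead(item) {
  const t = item.json.text;
  return ![t.Company, t.Url, t.InvestedOn].some((v) => v === "No Company");
}

const items = formatResultItems([
  { Company: "Acme AI", InvestedOn: "Artificial Intelligence", Url: "https://example.com/a" },
  { Company: "No Company", InvestedOn: "No Company", Url: "No Company" },
]);
const leads = items.filter(isRealLead);
console.log(leads.length); // 1
```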
Step 6: Test and Activate Your Workflow
Run a manual test, verify outputs in Google Sheets and Slack, and then activate the workflow for production.
- Click Execute Workflow on Manual Launch Trigger to run a test.
- Confirm that Combine Results receives results from both Await Series A Task and Await Series B Task.
- Verify that Upsert Sheet Row writes or updates the Company, InvestedOn, and Url columns in your destination sheet.
- Check Slack to confirm Post Slack Update posts the update message in the selected channel.
- Toggle the workflow to Active once testing is successful.
Watch Out For
- Google Sheets permissions can block upserts. If rows aren’t appearing, check the connected Google account in n8n credentials and confirm the sheet is shared to it.
- If you’re using Wait-style “await scraping task” behavior in BrowserAct, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
- Default prompts in the AI Agent are generic. Add your targeting rules (industries, minimum round size, exclusions) early or you will be editing outputs forever.
Common Questions
**How long does setup take?**
About an hour if your accounts are ready.
**Can I build this without coding experience?**
Yes. No coding is required, but someone should be comfortable connecting credentials and editing a Google Sheet that controls the searches.
**Is it free to run?**
Mostly. n8n has a free self-hosted option and a free trial on n8n Cloud; Cloud plans start at $20/month for higher volume. You’ll also need to factor in BrowserAct usage and Gemini model costs based on how many articles you analyze.
**Where should I host n8n?**
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
**How do I customize it for my niche?**
You can swap the keyword/location sheet to match your niche, then adjust what the AI Agent extracts (for example, add “round size” or “investors” as fields). If you prefer a different dedupe rule, change the Google Sheets upsert to match on URL or “Company + Location” instead of only “Company”. Teams also commonly tighten the filter so only certain industries make it into the final sheet.
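If you move to a composite dedupe rule like “Company + Location”, it helps to normalize the key in a Code node before the upsert so casing and spacing differences don’t create duplicates. A minimal sketch follows; the `Location` field is an assumption here, since the stock workflow only extracts Company, InvestedOn, and Url.

```javascript
// Sketch: build a normalized "Company + Location" dedupe key. Matching
// the Sheets upsert on this computed key instead of raw Company avoids
// duplicates caused by casing and whitespace differences.
// NOTE: `Location` is a hypothetical extra field you would add.
function dedupeKey(lead) {
  const norm = (s) => (s || "").trim().toLowerCase().replace(/\s+/g, " ");
  return `${norm(lead.Company)}|${norm(lead.Location)}`;
}

console.log(dedupeKey({ Company: "  Acme  AI ", Location: "US" })); // "acme ai|us"
```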
**Why are my BrowserAct scraping tasks failing?**
Usually it’s an expired or missing BrowserAct API key in n8n credentials. It can also be a workflow/template ID mismatch if your BrowserAct scraping template isn’t available in your account, or you’re hitting a temporary rate limit because too many scraping jobs were launched at once. Check the BrowserAct task status first, then re-run a single keyword to confirm the connection is healthy.
**How many searches can I run per day?**
If you self-host n8n, there’s no execution cap (it mainly depends on your server and API limits). Practically, most teams run 10–50 keyword/location combos per day without drama, as long as BrowserAct and Gemini quotas are sized for it.
**Is n8n a better fit than Zapier or Make for this?**
Often, yes. This workflow needs looping, waiting for scraping jobs to finish, merging results, and then transforming structured AI output before writing to Google Sheets. n8n handles that kind of multi-step logic cleanly, and self-hosting can keep costs predictable when you run it frequently. Zapier or Make can still work, but you may end up fighting limitations around long-running tasks and complex branching. If you’re unsure, talk to an automation expert and we’ll sanity-check the simplest setup for your volume.
Once this is running, your “funding research” becomes a routine update, not a recurring scramble. The workflow handles the repetitive parts so your team can focus on timing, messaging, and closing the conversation.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.