Bright Data + Google Sheets, NPS trends you trust
You check reviews in three places, skim survey exports in another, then try to “eyeball” NPS in a spreadsheet that slowly turns into a mess. By the time you trust the number, it’s already old. And honestly, one wrong filter or copy-paste can swing the story you tell leadership.
This is where NPS trends automation pays off. Customer Experience leads feel it when they’re asked for a weekly pulse. Product managers need clean trendlines before roadmap debates. Agency owners reporting to clients get stuck reconciling “why this week looks weird.”
This workflow uses Bright Data to pull feedback reliably, has AI normalize what it finds, calculates NPS, then appends one clean row to Google Sheets every week. You’ll see what it fixes, what you need, and how it runs.
How This Automation Works
See how this solves the problem:
n8n Workflow Template: Bright Data + Google Sheets, NPS trends you trust
flowchart LR
subgraph sg0["⏰ Run Weekly NPS Tracker Flow"]
direction LR
n0@{ icon: "mdi:play-circle", form: "rounded", label: "⏰ Run Weekly NPS Tracker", pos: "b", h: 48 }
n1@{ icon: "mdi:swap-vertical", form: "rounded", label: "✏️ Set Survey Page URL", pos: "b", h: 48 }
n2@{ icon: "mdi:robot", form: "rounded", label: "🧠 Scrape Reviews with Agent ..", pos: "b", h: 48 }
n3@{ icon: "mdi:code-braces", form: "rounded", label: "📊 Calculate NPS from Ratings", pos: "b", h: 48 }
n4@{ icon: "mdi:database", form: "rounded", label: "📄 Log NPS to Google Sheet", pos: "b", h: 48 }
n5@{ icon: "mdi:brain", form: "rounded", label: "🎯 Prompt & Guide Agent", pos: "b", h: 48 }
n6@{ icon: "mdi:cog", form: "rounded", label: "🌐 Execute Web Scrape (Bright..", pos: "b", h: 48 }
n7@{ icon: "mdi:robot", form: "rounded", label: "Auto-fixing Output Parser", pos: "b", h: 48 }
n8@{ icon: "mdi:brain", form: "rounded", label: "OpenAI Chat Model", pos: "b", h: 48 }
n9@{ icon: "mdi:robot", form: "rounded", label: "Structured Output Parser", pos: "b", h: 48 }
n8 -.-> n7
n9 -.-> n7
n7 -.-> n2
n5 -.-> n2
n0 --> n1
n1 --> n2
n3 --> n4
n2 --> n3
n6 -.-> n2
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n0 trigger
class n2,n7,n9 ai
class n5,n8 aiModel
class n4 database
class n3 code
classDef customIcon fill:none,stroke:none
class n3 customIcon
The Challenge: Weekly NPS Reporting That’s Actually Comparable
Weekly NPS sounds simple until you have more than one feedback source. Reviews live on public pages, surveys come from different tools, and scores show up in formats that don’t match. So you scrape, you export, you paste, you “clean it up,” and then you wonder if you missed something. The worst part is the mental load: every Monday (or Friday) you re-learn the same steps, double-check the same columns, and still hesitate before sending the update because one typo can create a fake trend.
It adds up fast. Here’s where it usually breaks down.
- Someone has to manually open multiple review pages and hunt for the latest scores, which is tedious and easy to postpone.
- Copy-pasting into Sheets introduces subtle errors like shifted columns, mixed date formats, and duplicated entries.
- When sources change their layout or block scraping, the whole “weekly number” quietly goes stale.
- Even when you get the data, turning it into a consistent NPS calculation is a recurring mini-project.
The Fix: Automated NPS Collection, Scoring, and Logging
This workflow turns a weekly NPS update into an automatic routine you can trust. It starts on a schedule, pulls the review or survey page you care about, then uses a Bright Data scraping tool so you’re less likely to get blocked or served incomplete pages. Next, an AI agent reads what comes back and extracts the pieces you actually need (scores, counts, and any structured signals you’ve defined). Once the scores are normalized, the workflow calculates NPS in code and appends a single, consistent row into Google Sheets. Your spreadsheet becomes the source of truth, not a collage of pasted snippets.
The workflow kicks off weekly, then moves through three phases: fetch the latest feedback, clean and interpret it with AI, and compute NPS from the extracted scores. Finally, it logs the result to Google Sheets so reporting is just “open the sheet and look.”
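For reference, the standard NPS formula is the percentage of promoters minus the percentage of detractors. A minimal JavaScript sketch (the same language the workflow's code step uses), assuming the conventional 0–10 buckets (9–10 promoters, 7–8 passives, 0–6 detractors):

```javascript
// Classify 0–10 survey scores into NPS buckets and compute the score.
function computeNps(scores) {
  const promoters = scores.filter((s) => s >= 9).length;
  const passives = scores.filter((s) => s === 7 || s === 8).length;
  const detractors = scores.filter((s) => s <= 6).length;
  const total = scores.length;
  // NPS = %promoters − %detractors, rounded to a whole number.
  const nps = total === 0 ? 0 : Math.round(((promoters - detractors) / total) * 100);
  return { totalResponses: total, promoters, passives, detractors, nps };
}
```

The key point is that NPS ignores passives entirely: they count toward the total but neither add to nor subtract from the score.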
What Changes: Before vs. After
| What This Eliminates | Impact You’ll See |
|---|---|
| Manually opening review pages and hunting for the latest scores | A scheduled run fetches every source at the same weekly cadence |
| Copy-paste errors like shifted columns, mixed date formats, and duplicates | One consistent, machine-appended row per week in Google Sheets |
| Silent staleness when a source changes layout or blocks scraping | Bright Data’s scraping tool keeps retrieval reliable |
| Re-deriving the NPS calculation by hand each week | A fixed code step computes NPS the same way every time |
Real-World Impact
Say you track NPS signals from three places each week (a review platform page, a survey summary, and a feedback form report). Manually, it’s often about 20 minutes per source to open, extract, paste, and sanity-check, which is roughly an hour every week. With this workflow, you spend maybe 10 minutes once setting the sources and your Google Sheet columns. After that, the scheduled run logs the new NPS row automatically, so your “weekly update” is basically opening the sheet.
Requirements
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- Bright Data for reliable web scraping access.
- Google Sheets to store weekly NPS rows.
- OpenAI API key (get it from your OpenAI dashboard).
Skill level: Intermediate. You’ll connect accounts, paste credentials, and adjust a few fields like URLs and sheet columns.
Need help implementing this? Talk to an automation expert (free 15-minute consultation).
The Workflow Flow
A weekly schedule triggers the run. The workflow starts with a weekly scheduler, so you get a fresh NPS entry at the same cadence every time.
Your target review or survey link is defined. A simple “set fields” step stores the page URL (or URLs) you want to track, which keeps the rest of the workflow consistent.
Bright Data retrieves the page and AI extracts usable signals. The workflow uses the Bright Data scraping tool, then an AI agent and output parsers to turn messy page content into structured values like scores and counts.
NPS is computed and appended to Google Sheets. A code step calculates your NPS from the extracted scores, then a Google Sheets node appends one new row so your trendline grows over time.
You can easily modify the tracked sources and the spreadsheet columns to match your reporting format. See the full implementation guide below for customization options.
Step-by-Step Implementation Guide
Step 1: Configure the Schedule Trigger
This workflow runs weekly to collect reviews and compute an NPS score.
- Add the ⏰ Weekly NPS Scheduler trigger node.
- Set the weekly schedule so it triggers on Monday at 09:00 (based on the node’s weekly interval configuration).
- Connect ⏰ Weekly NPS Scheduler to ✏️ Define Review Page Link.
Step 2: Connect Google Sheets
This step defines where the computed NPS data will be stored.
- Open 📄 Append NPS to Sheet.
- Credential Required: Connect your googleSheetsOAuth2Api credentials.
- Set Operation to `append`.
- Set Spreadsheet to `[YOUR_ID]` and Sheet to `Sheet1` (gid 0).
- Map the columns to these expressions: Total Responses → `{{ $json.totalResponses }}`, Promoters → `{{ $json.promoters }}`, Passive → `{{ $json.passives }}`, Detractor → `{{ $json.detractors }}`, NPS → `{{ $json.nps }}`, summary → `{{ $json.message }}`.
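Those column expressions assume the compute step emits a single item carrying these fields on its `json`. For orientation, here is an illustrative (hypothetical) payload shape reaching the Google Sheets node; the values are made up, the real ones come from the NPS code step:

```javascript
// Illustrative item shape arriving at the Google Sheets node.
// Values are examples only; the real ones come from the compute step.
const sampleItem = {
  json: {
    totalResponses: 48,
    promoters: 30,
    passives: 10,
    detractors: 8,
    nps: 46, // Math.round(((30 - 8) / 48) * 100)
    message: "NPS 46 from 48 reviews",
  },
};
```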
Step 3: Set Up the Review Source
Define the review page URL that the agent will scrape.
- Open ✏️ Define Review Page Link.
- Add a string field named url with value `https://www.trustpilot.com/review/shopify.com`.
- Confirm the connection from ✏️ Define Review Page Link to 🧠 Gather Reviews via Agent.
Step 4: Configure the AI Agent and Tools
These nodes scrape the review page and parse structured review data for scoring.
- Open 🧠 Gather Reviews via Agent and set the prompt text to `=Extract Customer reviews, Star ratings (1 to 5 stars), Comments (optional for deeper insight) and Date of review from the following url URL: {{ $json.url }}`.
- Open 🎯 Guide Agent Prompt and confirm the model is set to `gpt-4o-mini`. Credential Required: Connect your openAiApi credentials.
- Open 🌐 Run Web Scrape Tool and confirm Tool Name is `scrape_as_markdown` and Tool Parameters is {{ /*n8n-auto-generated-fromAI-override*/ $fromAI('Tool_Parameters', ``, 'json') }}. Credential Required: Connect your mcpClientApi credentials.
- Open Structured Data Parser and confirm the JSON schema example matches your expected review format.
- Open OpenAI Chat Engine and confirm the model is set to `gpt-4o-mini`. Credential Required: Connect your openAiApi credentials.
- Note that 🌐 Run Web Scrape Tool, Auto-Correct Output Parser, and Structured Data Parser are AI sub-nodes. Credentials must be added on the parent nodes (🎯 Guide Agent Prompt and OpenAI Chat Engine), not the sub-nodes.
Step 5: Compute the NPS Score
This step converts star ratings into an NPS score and prepares the output fields.
- Open 📊 Compute NPS from Scores and keep the JavaScript logic as provided for converting 1–5 stars into 0–10 NPS groups.
- Confirm the output fields include totalResponses, promoters, passives, detractors, nps, and message.
- Ensure the execution flow is 🧠 Gather Reviews via Agent → 📊 Compute NPS from Scores → 📄 Append NPS to Sheet.
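If you want to sanity-check or adapt that conversion, here is a minimal sketch of the kind of logic the code node might contain. The star-to-bucket mapping (5 stars → promoter, 4 → passive, 1–3 → detractor) and the `rating` field name are assumptions for illustration, not the template's exact code:

```javascript
// Hypothetical mapping of 1–5 star reviews onto NPS buckets.
// Assumes each review object carries a numeric "rating" field.
function npsFromStarReviews(reviews) {
  let promoters = 0, passives = 0, detractors = 0;
  for (const r of reviews) {
    const stars = Number(r.rating);
    if (stars >= 5) promoters++;       // 5 stars ≈ 9–10 on the 0–10 scale
    else if (stars === 4) passives++;  // 4 stars ≈ 7–8
    else detractors++;                 // 1–3 stars ≈ 0–6
  }
  const total = reviews.length;
  const nps = total ? Math.round(((promoters - detractors) / total) * 100) : 0;
  return {
    totalResponses: total,
    promoters,
    passives,
    detractors,
    nps,
    message: `NPS ${nps} from ${total} reviews`,
  };
}

// In an n8n Code node you would return items, e.g.:
// return [{ json: npsFromStarReviews(items[0].json.output.reviews) }];
```

If your ratings arrive on another scale (say 0–10 directly), only the three branch conditions need to change.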
Note: if the agent’s structured output isn’t available at `items[0].json.output`, the NPS calculation will return zero.
Step 6: Test and Activate Your Workflow
Run a manual test to validate scraping, scoring, and sheet updates before turning on the schedule.
- Click Execute Workflow to run ⏰ Weekly NPS Scheduler manually.
- Verify that 🧠 Gather Reviews via Agent outputs structured review data and 📊 Compute NPS from Scores produces the NPS summary fields.
- Confirm a new row appears in the Google Sheet with the NPS data from 📄 Append NPS to Sheet.
- Toggle the workflow to Active to enable the weekly schedule in production.
Watch Out For
- Google Sheets credentials can expire or need specific permissions. If things break, check the Google connection status in n8n’s Credentials and confirm the sheet is shared with the right Google account.
- If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
- Default prompts in AI nodes are generic. Add your brand voice early or you’ll be editing outputs forever.
Common Questions
How long does this take to set up?
About an hour if your accounts and sheet are ready.
Can I set this up without coding skills?
Yes. You won’t write code, but you will copy credentials, choose URLs, and confirm your Google Sheet columns match the workflow.
Is n8n free to use?
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in OpenAI API usage and your Bright Data plan, which depend on how many pages you scrape each week.
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
Start by swapping the tracked source in “Define Review Page Link” to your own review or survey URL, then adjust the agent instructions in the “Guide Agent Prompt” node so it extracts the fields you care about. If your scores come in a different scale, tweak the “Compute NPS from Scores” code to map them cleanly. Common customizations include tracking multiple URLs per run, logging extra columns (like volume of responses), and adding a second Google Sheet tab for monthly rollups.
Usually it’s an expired key or the Bright Data zone isn’t allowed for the target site. Update the credentials in the MCP Client tool node, then rerun a single test execution to see the raw response. If the scrape returns an empty page, the site may be serving a different layout or blocking that zone, so try a different target configuration inside Bright Data.
For most teams, this is “one weekly execution per source,” so volume is rarely the limiting factor. On n8n Cloud Starter you get a monthly execution cap, and self-hosting removes that cap (your server becomes the constraint). Practically, the bottleneck is scraping and AI parsing time, not the Sheets append.
Often, yes, because this flow mixes scraping, AI parsing, and custom computation, and that’s where Zapier or Make can get pricey or awkward. n8n handles branching logic and “do something only if the data looks right” checks without turning it into a tangled set of zaps. Also, self-hosting matters when you want full control and predictable runs. That said, if you only need “survey tool → Google Sheets,” Zapier or Make can be faster to start. Talk to an automation expert if you’re not sure which fits.
Once this is running, your weekly NPS update stops being a recurring chore and becomes a reliable habit. The workflow handles the repetitive parts so you can focus on what the trend means.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.