Bright Data to Google Sheets, pricing change alerts
Manually checking competitor pricing pages is the kind of “quick task” that quietly turns into a weekly time sink. You look at a page, second-guess what changed, paste notes into a doc, and still miss the moment pricing actually shifts.
Marketing managers feel this when messaging needs to react fast. A product lead needs a clean history, not screenshots. And if you run a small agency, pricing change alerts matter because clients ask, and they expect answers today. This pricing change alerts automation keeps a tidy log in Google Sheets and only writes an update when something really changes.
You’ll set up an n8n workflow that scrapes a competitor page via Bright Data, uses AI to extract structured pricing details, compares against your sheet, and writes a new record only when the workflow detects a real difference.
How This Automation Works
Here’s the complete workflow you’ll be setting up:
n8n Workflow Template: Bright Data to Google Sheets, pricing change alerts
flowchart LR
subgraph sg0["⏰ Scheduled Pricing Check Flow"]
direction LR
n0@{ icon: "mdi:play-circle", form: "rounded", label: "⏰ Scheduled Listing Check", pos: "b", h: 48 }
n1@{ icon: "mdi:swap-vertical", form: "rounded", label: "🛠️ Configure Lookup Inputs", pos: "b", h: 48 }
n2@{ icon: "mdi:brain", form: "rounded", label: "🧠 OpenAI Reasoning Core", pos: "b", h: 48 }
n3@{ icon: "mdi:cog", form: "rounded", label: "MCP Scrape to Markdown", pos: "b", h: 48 }
n4@{ icon: "mdi:cog", form: "rounded", label: "No-Op Placeholder", pos: "b", h: 48 }
n5@{ icon: "mdi:robot", form: "rounded", label: "AI Extraction Agent", pos: "b", h: 48 }
n6@{ icon: "mdi:database", form: "rounded", label: "Fetch Pricing Records", pos: "b", h: 48 }
n7@{ icon: "mdi:swap-horizontal", form: "rounded", label: "Price Change Decision", pos: "b", h: 48 }
n8@{ icon: "mdi:database", form: "rounded", label: "Modify Pricing Sheet", pos: "b", h: 48 }
n9@{ icon: "mdi:robot", form: "rounded", label: "Auto-Repair Output Parser", pos: "b", h: 48 }
n10@{ icon: "mdi:brain", form: "rounded", label: "OpenAI Chat Engine", pos: "b", h: 48 }
n11@{ icon: "mdi:robot", form: "rounded", label: "Structured Output Decoder", pos: "b", h: 48 }
n5 --> n7
n7 --> n4
n7 --> n8
n10 -.-> n9
n6 --> n5
n2 -.-> n5
n11 -.-> n9
n9 -.-> n5
n1 --> n6
n0 --> n1
n3 -.-> n5
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n0 trigger
class n5,n9,n11 ai
class n2,n10 aiModel
class n7 decision
class n6,n8 database
Why This Matters: Competitor Pricing Changes Are Easy to Miss
Pricing pages change in annoying ways. Sometimes it’s a real price increase. Other times it’s a renamed plan, a new footnote, a different billing toggle, or a “limited time” badge that quietly becomes permanent. When you track this manually, you end up with messy notes, inconsistent screenshots, and a timeline that no one trusts. And because it’s boring work, it tends to happen late. That’s when your sales team is already hearing objections you could have prepared for.
The friction compounds. Here’s where it breaks down.
- You waste about 20 minutes per competitor just confirming “nothing changed.”
- Copy-pasting plan details into docs leads to typos, missing add-ons, and confusing comparisons later.
- Teams over-alert themselves with constant notifications, so the one important change gets ignored.
- Some sites block scrapers or rate-limit you, which means your “monitoring” only works when the website feels like cooperating.
What You’ll Build: Bright Data Scraping + AI Extraction + Sheet Updates
This workflow runs on a schedule inside n8n and checks one (or many) competitor pricing URLs you define. When it runs, Bright Data fetches the page content in a way that’s far less likely to get blocked, then the workflow passes that content to an AI agent that extracts the pieces you actually care about, like plan names, monthly vs annual pricing, seat limits, and key notes. Next, n8n pulls your existing pricing history from Google Sheets and compares what it just extracted with the last stored version. If the data matches, nothing happens. If it’s different, the workflow updates your spreadsheet so your history stays clean and easy to audit.
The workflow starts with a scheduled check. Then it scrapes and normalizes the competitor page into structured pricing data. Finally, it makes a simple decision: write a new row (or update a record) only when a real change is detected.
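The whole run can be sketched in plain JavaScript. Every name here is an illustrative stand-in for an n8n node, not a real n8n or Bright Data API, and the stub bodies fake the scrape and extraction:

```javascript
// Sketch of one scheduled run. Each function stands in for an n8n node;
// all names and stub return values are illustrative, not real APIs.
async function scrapePage(url) {
  // Bright Data MCP: fetch the page and convert it to Markdown (stubbed).
  return "## Pro\n$49/mo";
}

async function extractPlans(markdown) {
  // AI agent: turn the Markdown into structured plan data (stubbed).
  return { plans: [{ plan_name: "Pro", price: "$49" }] };
}

function samePricing(plans, storedRow) {
  // Mirrors the If node: compare each extracted price to its stored column.
  return plans.every((p, i) => storedRow[`${i + 1} Pricing`] === p.price);
}

async function checkCompetitor(url, storedRow) {
  const markdown = await scrapePage(url);
  const { plans } = await extractPlans(markdown);
  // Only touch the sheet when something actually changed.
  return samePricing(plans, storedRow) ? "no-op" : "update-sheet";
}
```

The key design choice is the early exit: when nothing changed, the run ends without writing, which is what keeps the sheet trustworthy.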
What You’re Building
| What Gets Automated | What You’ll Achieve |
|---|---|
| Scheduled scraping of competitor pricing pages via Bright Data | No more manual page checks or missed changes |
| AI extraction of plan names, prices, and billing notes | Clean, structured data instead of screenshots and scattered notes |
| Change detection against your Google Sheet | A pricing history that only updates when something really changes |
Expected Results
Say you track 6 competitors and you do it twice a week. Manually, even a “quick check” is maybe 15 minutes per competitor once you open tabs, confirm billing toggles, and log notes, which comes out to about 3 hours weekly. With this workflow, you spend about 10 minutes up front setting URLs and sheet columns, then each run is automatic and your sheet only changes when pricing changes. You still review the sheet, but it’s a fast scan, not an investigation.
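The arithmetic behind that estimate, spelled out:

```javascript
// Worked version of the time estimate above: 6 competitors, 15 minutes
// per check, checked twice a week.
const competitors = 6;
const minutesPerCheck = 15;
const checksPerWeek = 2;
const weeklyMinutes = competitors * minutesPerCheck * checksPerWeek; // 180
const weeklyHours = weeklyMinutes / 60;                              // 3
console.log(weeklyMinutes, weeklyHours);
```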
Before You Start
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- Bright Data for reliable access to pricing pages.
- Google Sheets to store history and comparisons.
- OpenAI API key (get it from your OpenAI dashboard).
Skill level: Intermediate. You’ll connect accounts, set a few fields, and be comfortable testing runs and reading execution logs.
Want someone to build this for you? Talk to an automation expert (free 15-minute consultation).
Step by Step
A scheduled check kicks things off. The workflow runs on a cadence you control (daily, weekly, or whatever fits). It also sets up your lookup inputs so every run knows which competitor URL and sheet record to use.
The pricing page gets scraped safely. Bright Data fetches the competitor page content and outputs it in a consistent format (in this workflow, it’s scraped and converted into Markdown). That consistency matters because pricing pages are full of dynamic sections, popovers, and repeated labels.
AI extracts the pricing data you actually want. An AI agent reads the scraped content and pulls out structured fields like plan name, price, billing period notes, and anything else you define. There’s also an auto-repair parser step to fix messy outputs when the source page is weird or the content shifts.
Google Sheets decides if anything changed. The workflow fetches existing pricing records, compares them using an If decision, and then either does nothing or updates the sheet with the new snapshot. The “do nothing” path is intentional. Silence is a feature.
You can easily modify which competitors are checked and what fields are extracted based on your needs. See the full implementation guide below for customization options.
Step-by-Step Implementation Guide
Step 1: Configure the Schedule Trigger
Set the workflow’s cadence so pricing checks run automatically each day.
- Add and open ⏰ Scheduled Listing Check.
- Set the schedule rule to run at Trigger At Hour 9 (a daily 9 AM run).
- Connect ⏰ Scheduled Listing Check to 🛠️ Configure Lookup Inputs.
Step 2: Connect Google Sheets
Pull the current pricing records and update the spreadsheet when changes are detected.
- Open Fetch Pricing Records and select your spreadsheet: set Document to `[YOUR_ID]` and Sheet to `gid=0`.
- Credential Required: Connect your googleSheetsOAuth2Api credentials to Fetch Pricing Records.
- Open Modify Pricing Sheet and confirm Operation is `update`, with Document `[YOUR_ID]` and Sheet `gid=0`.
- Verify the update mappings in Modify Pricing Sheet, for example: 1 Plan → `{{ $json.output.plans[0].plan_name }}`, 1 Pricing → `{{ $json.output.plans[0].price }}`, and row_number → `2`.
- Credential Required: Connect your googleSheetsOAuth2Api credentials to Modify Pricing Sheet.
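If the numbered-column mapping feels abstract, here is how the agent output translates into a sheet row, written as plain JavaScript. The column names follow the article’s example mappings; `buildRow` is a hypothetical helper for illustration, not an n8n API:

```javascript
// Build the Modify Pricing Sheet payload from the agent's output.plans array.
// Column names ("1 Plan", "1 Pricing", ...) match the article's mappings;
// buildRow is an illustrative helper, not part of n8n.
function buildRow(output) {
  const row = { row_number: 2 }; // the single tracked row in this example
  output.plans.forEach((plan, i) => {
    row[`${i + 1} Plan`] = plan.plan_name; // e.g. column "1 Plan"
    row[`${i + 1} Pricing`] = plan.price;  // e.g. column "1 Pricing"
  });
  return row;
}
```

So `buildRow({ plans: [{ plan_name: "Free", price: "$0" }] })` produces `{ row_number: 2, "1 Plan": "Free", "1 Pricing": "$0" }`, which is exactly the shape the update operation expects.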
Step 3: Set Up AI Extraction and Parsing
Configure the AI pipeline that scrapes pricing, extracts plan data, and structures the output.
- In 🛠️ Configure Lookup Inputs, set url to `https://clickup.com/pricing`.
- Open AI Extraction Agent and set the Text field to `=Scrape Plan name and pricing from the url below url: {{ $('🛠️ Configure Lookup Inputs').item.json.url }}`.
- Ensure 🧠 OpenAI Reasoning Core is connected as the language model for AI Extraction Agent. Credential Required: Connect your openAiApi credentials.
- Confirm MCP Scrape to Markdown is connected as an AI tool to AI Extraction Agent, with toolName `scrape_as_markdown` and toolParameters {{ /*n8n-auto-generated-fromAI-override*/ $fromAI('Tool_Parameters', ``, 'json') }}. Credential Required: Connect your mcpClientApi credentials (configure via the AI tool setup attached to AI Extraction Agent).
- Check that Structured Output Decoder and Auto-Repair Output Parser are connected as the output parsing chain for AI Extraction Agent. These are AI sub-nodes, so credentials are managed through the parent LLM nodes.
- Ensure OpenAI Chat Engine is connected to Auto-Repair Output Parser as its language model. Credential Required: Connect your openAiApi credentials.
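For reference, the expressions in this guide (`output.plans[n].plan_name`, `output.plans[n].price`) assume the agent returns roughly the shape below. The plan names and prices are illustrative, and the validator is a minimal stand-in for what the Structured Output Decoder enforces:

```javascript
// Assumed shape of the agent's structured output; values are illustrative.
const example = {
  plans: [
    { plan_name: "Free Forever", price: "$0" },
    { plan_name: "Unlimited", price: "$7" },
  ],
};

// Minimal stand-in for the structured-output check: every plan must carry
// string plan_name and price fields, or downstream mappings will break.
function isValidOutput(output) {
  return (
    Array.isArray(output.plans) &&
    output.plans.every(
      (p) => typeof p.plan_name === "string" && typeof p.price === "string"
    )
  );
}
```

When the page layout shifts and the agent emits something that fails this kind of check, the Auto-Repair Output Parser re-prompts the model to fix it.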
Step 4: Configure Decision Logic and Routing
Compare extracted prices to stored values and route accordingly.
- Open Price Change Decision and confirm the comparison conditions, such as leftValue `{{ $('Fetch Pricing Records').item.json['1 Pricing'] }}` equals rightValue `{{ $('AI Extraction Agent').item.json.output.plans[0].price }}`.
- Verify all four plan comparisons use the matching array indexes: `plans[0]` through `plans[3]`.
- Confirm routing: Price Change Decision sends the first output to No-Op Placeholder and the second output to Modify Pricing Sheet.
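The four If-node conditions amount to this plain-JavaScript check. Column names follow the article’s examples; `priceChanged` is a hypothetical helper, not something the workflow itself defines:

```javascript
// Plain-JavaScript version of the Price Change Decision: compare stored
// columns "1 Pricing" … "4 Pricing" against plans[0] … plans[3].
// Column names match the article's examples; this helper is illustrative.
function priceChanged(storedRow, plans) {
  for (let i = 0; i < 4; i++) {
    const stored = storedRow[`${i + 1} Pricing`];
    const current = plans[i] && plans[i].price;
    if (stored !== current) return true; // route to Modify Pricing Sheet
  }
  return false; // route to No-Op Placeholder
}
```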
- Note: one rightValue in the template is `{{ $('AI Extraction Agent').item.json.output.plans[1].price }}1`, with a stray trailing 1. Remove the trailing 1 if you want a true equality check.

Step 5: Test and Activate Your Workflow
Run a manual test to confirm data flows correctly before enabling the scheduled run.
- Click Execute Workflow to run ⏰ Scheduled Listing Check manually.
- Verify Fetch Pricing Records loads the current spreadsheet row and AI Extraction Agent outputs plan names and prices.
- If a change is detected, confirm Modify Pricing Sheet updates row 2 with the new plan data. If no change is detected, confirm the run ends at No-Op Placeholder.
- Activate the workflow by toggling Active so it runs daily at the scheduled hour.
Troubleshooting Tips
- Bright Data credentials can expire or require the right zone/config. If scraping suddenly fails, check the Bright Data dashboard and the MCP Client node credential settings first.
- If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
- Default prompts in AI nodes are generic. Tighten the extraction prompt early by naming the exact fields you want (plan name, price, billing period), or you’ll be hand-fixing outputs forever.
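One way to catch a blocked or challenge-page response before it reaches the AI step is a small guard in an n8n Code node between the scrape and the extraction. The marker strings and length threshold below are assumptions to tune for your target sites:

```javascript
// Heuristic sanity check on Bright Data's Markdown output. Markers and the
// 200-character threshold are assumptions; adjust for the sites you track.
function looksLikeChallengePage(markdown) {
  const markers = ["verify you are human", "captcha", "access denied"];
  const text = (markdown || "").toLowerCase();
  if (text.length < 200) return true; // real pricing pages are rarely this short
  return markers.some((m) => text.includes(m));
}
```

If the guard trips, stop the run (or retry the scrape) rather than letting the AI agent "extract" prices from an error page.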
Quick Answers
How long does setup take?
About an hour if your Bright Data, OpenAI, and Google Sheets accounts are ready.
Do I need to know how to code?
No. You’ll mostly connect credentials and customize the extracted fields. The only “logic” is basic mapping and a change/no-change decision.
Is this free to run?
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in OpenAI API costs (often a few cents per run, depending on how long the scraped page is) and whatever you pay for Bright Data usage.
Where should I run n8n?
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
Can I customize the workflow?
Yes, and you probably should. You can swap the Schedule Trigger for a Webhook if you want on-demand checks, or feed competitor URLs from a Google Sheet instead of hardcoding them in the “Configure Lookup Inputs” node. Common customizations include extracting feature text (not just price), tracking add-ons separately, and writing changes to a second sheet for weekly summaries.
What if the scrape fails?
Usually it’s credentials or configuration. Re-check the Bright Data zone/settings used by the MCP scrape node, then confirm your account has access to that product and enough balance for requests. If the scrape works sometimes but not always, the target site may be returning a challenge page, which means you should adjust the Bright Data configuration and test the scraped output before it reaches the AI extraction step.
Are there execution limits?
If you self-host n8n, there’s no execution limit (it mainly depends on your server and your Bright Data/OpenAI throughput).
Is n8n really the right tool for this versus Zapier or Make?
Often, yes, because this isn’t a simple “if X then email” scenario. You’re scraping content, extracting structured fields with AI, repairing imperfect outputs, and making a compare decision before writing to a database-like sheet. n8n handles branching and data shaping comfortably, and self-hosting can keep execution costs predictable when checks run frequently. Zapier or Make can still work if you already have clean pricing data coming in from another source, but they get awkward once you need multi-step parsing and robust comparisons. If you’re torn, Talk to an automation expert and describe your competitors, check frequency, and how you want alerts delivered.
Once this is running, competitor monitoring stops being a recurring chore and becomes a quiet, reliable system. Your spreadsheet stays clean, and you only pay attention when there’s something worth reacting to.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.