January 21, 2026

Bright Data to Google Sheets, Amazon data captured

Lisa Granqvist, Workflow Automation Expert

Copying Amazon listings into a spreadsheet sounds simple. Then you do it for 50 products, across a few searches, and your “quick task” turns into a messy, error-prone chore.

E-commerce analysts feel it first, because price checks and review counts change constantly. Market researchers and agency teams deal with the same headache when they have to prove trends with clean data. This Amazon scraping Sheets automation takes you from URL list to structured rows, without the copy-paste roulette.

Below is the workflow, what it fixes, what you’ll need, and how to make it fit your tracking setup.

How This Automation Works

See how this solves the problem:

n8n Workflow Template: Bright Data to Google Sheets, Amazon data captured

The Challenge: Tracking Amazon listings without spreadsheet chaos

If you track Amazon search results for pricing, social proof, or competitor movement, the painful part isn’t “finding” the products. It’s everything after. You click into results, copy names, paste prices, eyeball ratings, then try to keep your sheet aligned while Amazon reshuffles the page. One missed column or one extra line break and your dataset is basically untrustworthy. The worst part is the mental load. You’re doing repetitive work while also trying to notice meaningful changes.

It adds up fast. Here’s where it usually breaks down in real teams.

  • Manual copying makes it easy to mix up products that look similar in the results list.
  • Amazon pages include lots of extra markup, so what you paste is often full of junk that ruins clean columns.
  • When you track multiple keywords, you end up with inconsistent formats across tabs, which means reporting takes longer than it should.
  • Doing “quick checks” every week quietly turns into a few hours of repetitive work.

The Fix: Bright Data scraping + AI extraction into Google Sheets

This workflow starts with a simple list of Amazon search result URLs in Google Sheets. When you run it, n8n pulls those URLs in batches, fetches each page’s raw HTML via Bright Data’s Web Unlocker, and then cleans the HTML so only the product-relevant elements remain. That cleaned page content is sent to an LLM (GPT-4 via an OpenRouter/LangChain setup) which extracts structured product fields like name, description, rating, review count, and price. Finally, n8n expands the extracted items and appends them into a results sheet, row by row, so you get a tidy table you can sort, filter, and chart. It’s a full loop from “URLs to track” to “results you can use.”

The workflow begins when you trigger it manually in n8n, which is handy for controlled runs. It reads URLs from Google Sheets, iterates through them in batches, scrapes each page through Bright Data, then uses AI to turn messy markup into consistent columns. The last step writes everything back to Google Sheets so your tracking stays centralized.

Real-World Impact

Say you track 20 Amazon search URLs each week, and you want five fields per product (name, description, rating, reviews, price). At about 10 minutes per URL to copy, paste, and clean up columns, that's 200 minutes, over 3 hours, every week. With this workflow, you paste the URLs once, run it, and wait for the batch to finish. In practice that means a few minutes of setup, and the results appear in Google Sheets without you touching each row.

Requirements

  • n8n instance (try n8n Cloud free)
  • Self-hosting option if you prefer (Hostinger works well)
  • Google Sheets for URL input and results storage
  • Bright Data (Web Unlocker) to fetch Amazon HTML reliably
  • OpenRouter API key (get it from your OpenRouter dashboard)

Skill level: Intermediate. You’ll connect credentials and map Sheet IDs, but you won’t be writing an app from scratch.

Need help implementing this? Talk to an automation expert (free 15-minute consultation).

The Workflow Flow

A manual run kicks things off. You start the workflow in n8n when you’re ready to refresh your dataset (great for weekly tracking, audits, or client reporting days).

Google Sheets provides the target URLs. The workflow reads your “track” list, then moves through it using batching so you can scale up without hammering external services all at once.

Bright Data fetches the page, then the HTML gets cleaned. An HTTP request retrieves the raw markup, and a code step removes scripts, styling, and irrelevant tags so the AI sees a simpler page that focuses on product content.
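The template ships with its own cleanup code inside the Sanitize Markup node. As a rough sketch of what that kind of Code node does (the regexes and the node I/O shape here are illustrative assumptions, not the template's exact code):

```javascript
// Hypothetical sketch of an HTML-cleanup step like Sanitize Markup.
// Strips scripts, styles, and comments so the LLM sees less noise.
function sanitizeHtml(rawHtml) {
  return rawHtml
    .replace(/<script[\s\S]*?<\/script>/gi, '') // drop inline scripts
    .replace(/<style[\s\S]*?<\/style>/gi, '')   // drop stylesheets
    .replace(/<!--[\s\S]*?-->/g, '')            // drop HTML comments
    .replace(/\s{2,}/g, ' ')                    // collapse whitespace runs
    .trim();
}

// Inside an n8n Code node, this would typically wrap each item, e.g.:
// return items.map(i => ({ json: { cleanedHtml: sanitizeHtml(i.json.data) } }));
```

Less markup also means fewer tokens per page, which keeps LLM costs down on large batches.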

AI turns messy markup into structured fields. The LangChain/OpenRouter GPT-4 setup extracts product details into a predictable JSON shape, which is then expanded into individual items so each product becomes a row.

Google Sheets receives the results. n8n appends the final rows into your results sheet, keeping your table clean for filtering, monitoring, and comparing week-over-week changes.

You can easily modify the extracted fields (like adding availability or SKU) to match what you report on. See the full implementation guide below for customization options.

Step-by-Step Implementation Guide

Step 1: Configure the Manual Trigger

This workflow starts manually so you can verify scraping and parsing results before running at scale.

  1. Add the Manual Execution Start node as the trigger.
  2. Leave default settings as-is (no parameters required).
  3. Connect Manual Execution Start to Retrieve Target URLs.

Step 2: Connect Google Sheets

Pull target URLs and save output to Google Sheets.

  1. Open Retrieve Target URLs and set Sheet Name to {{TRACK_SHEET_GID}}.
  2. Set Document ID to {{WEB_SHEET_ID}}.
  3. Credential Required: Connect your googleSheetsOAuth2Api credentials in Retrieve Target URLs.
  4. Open Append Results Sheet and set Operation to append.
  5. Set Sheet Name to {{RESULTS_SHEET_GID}} and Document ID to {{WEB_SHEET_ID}}.
  6. Map columns to values: name → {{ $json.output.name }}, price → {{ $json.output.price }}, rating → {{ $json.output.rating }}, reviews → {{ $json.output.reviews }}, description → {{ $json.output.description }}.
  7. Credential Required: Connect your googleSheetsOAuth2Api credentials in Append Results Sheet.

⚠️ Common Pitfall: {{TRACK_SHEET_GID}} and {{RESULTS_SHEET_GID}} must hold tab gids (the number after gid= in the sheet URL, identifying a single tab), not the Document ID, which identifies the whole spreadsheet file.
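If you're unsure which value is which, both live in the spreadsheet's URL. This hypothetical helper (the URL in the test is a made-up example, and the function assumes a well-formed sheet URL) shows where each comes from:

```javascript
// A Google Sheets URL looks like:
//   https://docs.google.com/spreadsheets/d/<DOCUMENT_ID>/edit#gid=<SHEET_GID>
// The long token after /d/ is the Document ID; the number after gid= is the tab's gid.
function parseSheetUrl(url) {
  const docId = url.match(/\/d\/([^/]+)/)[1];   // spreadsheet file
  const gid = url.match(/[#?&]gid=(\d+)/)[1];   // individual tab
  return { docId, gid };
}
```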

Step 3: Set Up URL Batching and Page Fetching

Batch through URLs, then fetch page HTML via Bright Data.

  1. Connect Retrieve Target URLs to Batch URL Iterator.
  2. Connect Batch URL Iterator to Fetch Page Content (main output).
  3. In Fetch Page Content, set URL to https://api.brightdata.com/request and Method to POST.
  4. Enable Send Body and Send Headers.
  5. Set body parameters: zone → web_unlocker1, url → {{ $json.url }}, format → raw.
  6. Set header parameter Authorization to {{BRIGHTDATA_TOKEN}}.

⚠️ Common Pitfall: If {{BRIGHTDATA_TOKEN}} is not set as an environment variable, the request will fail with an authentication error.
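To sanity-check the request before wiring it into n8n, here's a sketch that builds the same payload the HTTP Request node sends. The endpoint and body parameters mirror the workflow config above; the Bearer prefix on the token is a common convention, so confirm the exact header format in your Bright Data dashboard:

```javascript
// Assemble the Web Unlocker request the Fetch Page Content node would send.
function buildBrightDataRequest(targetUrl, token) {
  return {
    endpoint: 'https://api.brightdata.com/request',
    method: 'POST',
    headers: {
      Authorization: `Bearer ${token}`, // Bearer prefix assumed; verify in your dashboard
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      zone: 'web_unlocker1', // must match your Web Unlocker zone name
      url: targetUrl,        // the Amazon search URL from the sheet
      format: 'raw',         // raw HTML back, not a JSON wrapper
    }),
  };
}
```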

Step 4: Clean and Parse Product Data with AI

Clean HTML and extract structured product details using the LLM chain.

  1. Connect Fetch Page Content to Sanitize Markup.
  2. In Sanitize Markup, keep the provided JavaScript Code to generate cleanedHtml.
  3. Connect Sanitize Markup to Extract Product Data.
  4. In Extract Product Data, set Text to {{ $json.cleanedHtml }} and keep Prompt Type as define.
  5. Ensure the message references the keyword expression: {{ $('Batch URL Iterator').item.json.url.split('/s?k=')[1].split('&')[0] }}.
  6. Connect OpenRouter Chat Engine to Extract Product Data as the language model.
  7. Connect Structured Result Parser to Extract Product Data as the output parser.

Credential Required: Add your OpenRouter credentials in OpenRouter Chat Engine. The Structured Result Parser is a sub-node—credentials should be added to OpenRouter Chat Engine, not the parser.
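The expression in step 5 just slices the search keyword out of the URL. Written as plain JavaScript (same logic, outside n8n's {{ }} syntax, with a decoding step added as an extra nicety the original expression skips):

```javascript
// Mirror of split('/s?k=')[1].split('&')[0] from the Extract Product Data prompt.
function extractKeyword(searchUrl) {
  const afterK = searchUrl.split('/s?k=')[1];
  if (!afterK) return null; // not an Amazon search-results URL
  // decodeURIComponent and the + replacement are not in the original expression
  return decodeURIComponent(afterK.split('&')[0].replace(/\+/g, ' '));
}
```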

Step 5: Expand and Save Results

Split AI output into items, append to the results sheet, and loop through the next batch.

  1. Connect Extract Product Data to Expand Output Items.
  2. In Expand Output Items, set Field to Split Out to output and Include to allOtherFields.
  3. Connect Expand Output Items to Append Results Sheet.
  4. Connect Append Results Sheet back to Batch URL Iterator to continue batching.
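Conceptually, the Split Out step turns one AI response carrying a list of products into one n8n item per product. A minimal sketch, assuming the AI output lands under json.output as the column mapping in Step 2 suggests:

```javascript
// Conceptual version of Expand Output Items: one item whose json.output is an
// array of products becomes one item (one future sheet row) per product.
function expandOutputItems(item) {
  return item.json.output.map(product => ({
    json: { output: product }, // each product keeps the output key the mapping expects
  }));
}
```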

Step 6: Test and Activate Your Workflow

Run a manual test to confirm scraping, parsing, and sheet output before enabling production use.

  1. Click Execute Workflow to start Manual Execution Start.
  2. Confirm Fetch Page Content returns HTML and Sanitize Markup outputs cleanedHtml.
  3. Verify Extract Product Data outputs structured items that match the schema in Structured Result Parser.
  4. Check Append Results Sheet for new rows with name, price, rating, reviews, and description.
  5. When successful, toggle the workflow to Active for production use (replace the manual trigger with a scheduled trigger if needed).

Watch Out For

  • Google Sheets credentials can expire or need specific permissions. If things break, check the n8n Credentials screen and the Google account access prompt first.
  • If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
  • Default prompts in AI nodes are generic. Add your brand voice early or you’ll be editing outputs forever.

Common Questions

How quickly can I implement this Amazon scraping Sheets automation?

About 30 minutes if your keys and Sheets are ready.

Can non-technical teams implement this Amazon scraping Sheets automation?

Yes, but you’ll want someone comfortable with connecting accounts. The setup is mostly credentials, sheet IDs, and a quick test run.

Is n8n free to use for this Amazon scraping Sheets workflow?

Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in Bright Data usage and OpenRouter LLM costs per run.

Where can I host n8n to run this automation?

Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.

How do I adapt this Amazon scraping Sheets solution to my specific challenges?

You can edit the fields the AI extracts by adjusting the schema in the Structured Result Parser and the prompt inside the Extract Product Data step. Common tweaks include adding availability, keeping ASIN/SKU-like identifiers when present, or pulling seller/brand fields for competitor reports. If you want to scrape Walmart or eBay instead, you usually keep the same pattern: fetch HTML, sanitize it, then update the extraction instructions so the model knows what elements matter.
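For example, extending the parser's expected shape might look like this (the five original fields come from the workflow; availability and brand are illustrative additions, and your parser's exact schema syntax may differ):

```javascript
// Example of the item shape you might show the Structured Result Parser
// after widening the schema for competitor reports.
const exampleOutput = {
  output: [
    {
      name: 'Example Product',
      description: 'Short marketing copy',
      rating: 4.5,
      reviews: 1234,
      price: '$19.99',
      availability: 'In Stock', // added field
      brand: 'ExampleBrand',    // added field
    },
  ],
};
```

Remember to add matching columns in the Append Results Sheet mapping for any new field.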

Why is my Bright Data connection failing in this workflow?

Usually it’s an invalid or expired Bright Data token, or the token isn’t selected in the HTTP Request credentials. It can also fail if the target URL is malformed, or if Bright Data throttles you after too many requests in a short window. Check the HTTP Request node’s execution output first; it will typically show an auth or rate-limit message you can act on.

What’s the capacity of this Amazon scraping Sheets solution?

It scales well because it processes URLs in batches, so you can run hundreds of URLs per job if your Bright Data and LLM quotas allow it.

Is this Amazon scraping Sheets automation better than using Zapier or Make?

For this use case, n8n is usually the better fit because you need batching, HTML cleanup, and an AI extraction step that benefits from more control. Zapier and Make can do HTTP calls, but long or messy HTML often becomes painful fast, and multi-item outputs can get expensive or awkward. n8n also gives you the option to self-host, which matters when you’re running large research jobs. That said, if you only scrape a couple of URLs once a month, a simpler tool might be “good enough.” Talk to an automation expert if you want a quick recommendation based on your volume.

Once this is running, Amazon tracking stops being a dreaded spreadsheet chore. You get cleaner data, faster refreshes, and more time for the part that actually matters: making a call based on what changed.

Need Help Setting This Up?

Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.

Lisa Granqvist

Workflow Automation Expert

Expert in workflow automation and no-code tools.
