Trustpilot to Google Sheets, ad angles ready to use
Reading competitor reviews is useful. Copying them into a spreadsheet, sorting by rating, and trying to “find patterns” while your campaign clock is ticking is the part that breaks you.
The pain this Trustpilot ad angles workflow solves hits performance marketers first, but small business owners and agency leads feel it too. You get a clean Google Sheet of reviews plus three ready-to-test ad copy variations built from real 1–2 star pain points.
Below, you’ll see exactly what this automation does, what it replaces, and how to adapt it to your niche without turning it into a science project.
How This Automation Works
The full n8n workflow, from trigger to final output:
n8n Workflow Template: Trustpilot to Google Sheets, ad angles ready to use
```mermaid
flowchart LR
subgraph sg0["On form submission - Discover Jobs Flow"]
direction LR
n0["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Snapshot Progress"]
n1["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/form.svg' width='40' height='40' /></div><br/>On form submission - Discove.."]
n2["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>HTTP Request- Post API call .."]
n3@{ icon: "mdi:cog", form: "rounded", label: "Wait - Polling Bright Data", pos: "b", h: 48 }
n4@{ icon: "mdi:swap-horizontal", form: "rounded", label: "If - Checking status of Snap..", pos: "b", h: 48 }
n5["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>HTTP Request - Getting data .."]
n6@{ icon: "mdi:robot", form: "rounded", label: "Basic LLM Chain", pos: "b", h: 48 }
n7@{ icon: "mdi:brain", form: "rounded", label: "OpenAI Chat Model", pos: "b", h: 48 }
n8@{ icon: "mdi:message-outline", form: "rounded", label: "Send Summary To Marketers", pos: "b", h: 48 }
n9@{ icon: "mdi:swap-horizontal", form: "rounded", label: "Filtering only bad reviews", pos: "b", h: 48 }
n10@{ icon: "mdi:cog", form: "rounded", label: "Aggregating all filtered rev..", pos: "b", h: 48 }
n11@{ icon: "mdi:database", form: "rounded", label: "Google Sheets - Adding All R..", pos: "b", h: 48 }
n6 --> n8
n7 -.-> n6
n0 --> n4
n9 --> n10
n3 --> n0
n10 --> n6
n11 --> n9
n1 --> n2
n2 --> n3
n5 --> n11
n4 --> n3
n4 --> n5
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n1 trigger
class n6 ai
class n7 aiModel
class n4,n9 decision
class n11 database
class n0,n2,n5 api
classDef customIcon fill:none,stroke:none
class n0,n1,n2,n5 customIcon
```
The Problem: Review Research Is Slow, Messy, and Easy to Get Wrong
If you’ve ever built ad angles from competitor reviews, you know the routine. You open Trustpilot, scroll forever, copy a few lines, paste them into a doc, then try to remember which review came from which month. Next comes the “analysis,” which usually means reading 50 complaints and guessing what the real pattern is. Then you write ads from memory. It’s not just time-consuming. It’s inconsistent, and the worst part is you can’t prove why an angle should work because your source material is scattered.
The friction compounds. And once you need to do this weekly (or across multiple competitors), it turns into a recurring tax on your marketing team.
- Copy-pasting reviews into a spreadsheet takes about 1–2 hours per competitor if you want enough data to trust it.
- You end up mixing timeframes, so the “trend” you spotted is really just last week’s noise.
- Manual filtering for 1–2 star reviews is tedious, which means it often gets skipped.
- Ad copy drafts get watered down because the research isn’t organized and nobody wants to reread 100 reviews.
The Solution: Trustpilot → Sheets → OpenAI Drafts You Can Test
This workflow turns competitor review scraping into a repeatable system. You start by submitting a simple form with a competitor’s Trustpilot URL and a timeframe (like 30 days or 12 months). n8n sends that request to Bright Data’s dataset API, then automatically waits and polls until the snapshot is ready. Once the data is available, the workflow retrieves all reviews and appends them into a structured Google Sheet so you can sort, filter, and share it like any other research doc. Then it narrows the focus to 1–2 star reviews, compiles the complaint text, and asks OpenAI (GPT-4o-mini) to summarize the real pain points and draft three ad copy variations based on what people actually hate.
The workflow starts with a form submission in n8n. From there, Bright Data handles the Trustpilot scraping while n8n waits and checks status in the background. Finally, Google Sheets stores the raw material and OpenAI turns it into a summary plus ad angles, which are emailed to your team.
What You Get: Automation vs. Results
| What This Workflow Automates | Results You’ll Get |
|---|---|
| Scraping competitor Trustpilot reviews for the timeframe you pick | A clean, sortable Google Sheet of reviews you can share like any research doc |
| Filtering down to 1–2 star reviews and compiling the complaint text | A summary of recurring pain points grounded in real customer language |
| Prompting OpenAI and emailing the output to your team | Three ready-to-test ad copy variations delivered to your inbox |
Example: What This Looks Like
Say you want to research 3 competitors before writing a new Facebook campaign. Manually, you might spend about 2 hours per competitor: 1 hour collecting reviews, 30 minutes sorting and tagging, and another 30 minutes pulling themes into a brief (so roughly 6 hours total). With this workflow, you submit three form runs that take about 5 minutes each, then wait for Bright Data to return the snapshot in the background. In under an hour of hands-off time, you have the Sheet filled, a summary of recurring complaints, and three ad copy variations emailed to your team.
What You’ll Need
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- Google Sheets to store reviews and keep a swipe file.
- Bright Data to scrape Trustpilot reviews reliably.
- OpenAI API key (get it from your OpenAI dashboard).
Skill level: Intermediate. You’ll connect credentials, paste a Sheet ID, and tweak a prompt, but you won’t be writing code.
Don’t want to set this up yourself? Talk to an automation expert (free 15-minute consultation).
How It Works
A form submission kicks it off. You enter the competitor’s Trustpilot URL and pick a timeframe (30 days through 12 months). That input becomes the “job request” used for scraping.
Bright Data scrapes, while n8n waits and checks status. The workflow triggers a dataset snapshot via HTTP request, pauses using a Wait node, then polls Bright Data until the snapshot is ready. No tab-switching. No manual refresh.
Reviews are stored and cleaned up in Google Sheets. Once results come back, n8n appends them to your template sheet, then filters down to 1–2 star reviews and compiles the text into something an LLM can actually work with.
OpenAI generates the useful part. GPT-4o-mini summarizes common negative feedback and drafts three ad copy variations. n8n then emails the summary and drafts via Gmail so the team can move straight into testing.
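The wait-and-poll pattern is the piece worth understanding: trigger the snapshot, pause, check progress, and loop until the status leaves `running`. A minimal Python sketch of that loop (the progress endpoint matches the Bright Data URL this workflow uses; `fetch_status` is injected so you can swap in a real authenticated HTTP call):

```python
import time

PROGRESS_URL = "https://api.brightdata.com/datasets/v3/progress/{snapshot_id}"

def poll_snapshot(snapshot_id, fetch_status, interval_s=120, max_checks=30):
    """Loop until the Bright Data snapshot stops reporting 'running'.

    fetch_status(url) should perform the authenticated GET and return the
    parsed JSON; it is injected here so the sketch stays testable offline.
    """
    url = PROGRESS_URL.format(snapshot_id=snapshot_id)
    for _ in range(max_checks):
        status = fetch_status(url).get("status")
        if status != "running":       # mirrors the workflow's If node
            return status             # e.g. "ready" or "failed"
        time.sleep(interval_s)        # mirrors the 2-minute Wait node
    raise TimeoutError("snapshot never finished")
```

In n8n the same loop is built from the Wait node, the status HTTP request, and the If node routing back to Wait.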
You can easily modify the timeframe options or the prompt tone based on your needs. See the full implementation guide below for customization options.
Step-by-Step Implementation Guide
Step 1: Configure the Form Trigger
This workflow starts when a user submits a form with a Trustpilot URL and time frame.
- Add and open Form Intake Trigger.
- Set Form Title to `Please Paste The URL of Your Trustpilot competitor`.
- In Form Fields, confirm the URL field label is `Competitor TRUSTPILOT URL (include https://www.trsutpilot.com/review/` and the dropdown options include `Last 30 days`, `Last 3 months`, `Last 6 months`, `Last 12 months`.
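After submission, the form trigger emits one item whose keys are the field labels verbatim, which is why the later expressions reference those long keys character for character. An illustrative Python dict of that payload (the values are examples, not real data):

```python
# Example of the item the form trigger produces. The keys must match the
# form field labels exactly, or downstream expressions resolve to nothing.
form_item = {
    "Competitor TRUSTPILOT URL (include https://www.trsutpilot.com/review/":
        "https://www.trustpilot.com/review/example-competitor.com",
    "Please select the time frame of reviews you'd like. "
    "If it's a big brand go with 30 days": "Last 30 days",
}
```

If you rename a field later, remember to update every expression that references the old label.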
Step 2: Connect Bright Data API Requests
These nodes trigger the Bright Data scrape and poll for completion before downloading results.
- Open Trigger Bright Data Job and set URL to `https://api.brightdata.com/datasets/v3/trigger` with Method `POST`.
- Set JSON Body to `=[ { "url": "{{ $json['Competitor TRUSTPILOT URL (include https://www.trsutpilot.com/review/'] }}", "date_posted": "{{ $json['Please select the time frame of reviews you\'d like. If it\'s a big brand go with 30 days'] }}" } ]`.
- In Query Parameters, set `dataset_id` to `[YOUR_ID]` and `include_errors` to `true`.
- In Header Parameters, set Authorization to `Bearer [CONFIGURE_YOUR_API_KEY]`.
- Open Delay for Data Poll and set Unit to `minutes` and Amount to `2`.
- Open Fetch Snapshot Status and set URL to `=https://api.brightdata.com/datasets/v3/progress/{{ $('Trigger Bright Data Job').item.json.snapshot_id }}`, keeping the Authorization header as `Bearer [CONFIGURE_YOUR_API_KEY]`.
- In Snapshot Status Check, confirm the condition checks `={{ $json.status }}` equals `running`.
- Open Retrieve Bright Data Results and set URL to `=https://api.brightdata.com/datasets/v3/snapshot/{{ $('Trigger Bright Data Job').item.json.snapshot_id }}` with the query parameter `format` set to `json` and the same Authorization header.
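To see what the Trigger Bright Data Job node actually sends, here is a hedged Python sketch that assembles the same request pieces. No network call is made; `dataset_id` and the token are placeholders you would fill in, and the endpoint matches the URL configured above:

```python
import json

TRIGGER_URL = "https://api.brightdata.com/datasets/v3/trigger"

def build_trigger_request(review_url, timeframe, dataset_id, api_key):
    """Assemble the POST that the workflow's HTTP Request node performs."""
    params = {"dataset_id": dataset_id, "include_errors": "true"}
    headers = {"Authorization": f"Bearer {api_key}"}
    # Mirrors the node's JSON Body: one object with url + date_posted.
    body = json.dumps([{"url": review_url, "date_posted": timeframe}])
    return TRIGGER_URL, params, headers, body
```

The response to this POST contains the `snapshot_id` that the progress and snapshot URLs interpolate later.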
Note: The workflow will fail if `[YOUR_ID]` and `[CONFIGURE_YOUR_API_KEY]` are not replaced with valid values.

Step 3: Connect Google Sheets
Store retrieved reviews in Google Sheets for recordkeeping and downstream filtering.
- Open Append Reviews to Sheet and confirm Operation is set to `append`.
- Set Document to `[YOUR_ID]` and Sheet to `gid=0`.
- Credential Required: Connect your googleSheetsOAuth2Api credentials.
Step 4: Filter and Aggregate Low Ratings
This stage keeps only 1–2 star reviews and compiles review text for summarization.
- Open Filter Low Ratings and ensure the conditions check `={{ $json.review_rating }}` equals `1` OR `2`.
- Open Compile Review Text and set Fields To Aggregate to aggregate `review_content` into `Aggregated_reviews`.
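In plain code, this stage is just a predicate plus a join. A minimal Python sketch (the field names `review_rating`, `review_content`, and `Aggregated_reviews` come from the workflow's own expressions):

```python
def filter_low_ratings(reviews):
    """Keep only reviews rated 1 or 2, like the Filter Low Ratings node."""
    return [r for r in reviews if r.get("review_rating") in (1, 2)]

def compile_review_text(reviews):
    """Join complaint text into a single Aggregated_reviews string."""
    return {"Aggregated_reviews": "\n".join(r["review_content"] for r in reviews)}
```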
Step 5: Set Up the AI Summary
Use an LLM to generate insights and ad copy from the aggregated complaints.
- Open LLM Summary Builder and set Prompt Type to `define`.
- Set Text to `=Read the following bad reviews, these are reviews of our competitors: {{ $json.Aggregated_reviews }} --- After reading them, summarize their weakest points. Don't mention the competitor name. Write 3 different ad copy variations for our Facebook ads campaign, addressing these concerns.`
- Open OpenAI Chat Engine and select the model `gpt-4o-mini`.
- Credential Required: Connect your openAiApi credentials in OpenAI Chat Engine, which powers LLM Summary Builder.
Step 6: Configure the Email Output
Send the summary and full complaint breakdown to your inbox.
- Open Email Marketing Summary and set Send To to `[YOUR_EMAIL]`.
- Set Subject to `=Summary of Complaints of competitor: {{ $('Form Intake Trigger').item.json['Competitor TRUSTPILOT URL (include https://www.trsutpilot.com/review/'] }}`.
- Set Message to `=Based on the following Trustpilot page: {{ $('Form Intake Trigger').item.json['Competitor TRUSTPILOT URL (include https://www.trsutpilot.com/review/'] }} Here is a summary of recent complaints including ideas for ad copy: {{ $json.text }} ----------------------------- I'm also attaching a breakdown of all recent complaints {{ $('Compile Review Text').item.json.Aggregated_reviews }}`.
- Credential Required: Connect your gmailOAuth2 credentials.
Step 7: Test & Activate
Validate the workflow end-to-end before turning it on.
- Click Execute Workflow, then submit the Form Intake Trigger form with a valid Trustpilot URL.
- Confirm Snapshot Status Check routes back to Delay for Data Poll when `status` is `running`, and to Retrieve Bright Data Results when it completes.
- Verify new rows appear in Google Sheets from Append Reviews to Sheet and that only 1–2 star reviews continue through Filter Low Ratings.
- Check your inbox for the email from Email Marketing Summary containing a summary and the aggregated complaints.
- Once successful, toggle the workflow to Active for production use.
Common Gotchas
- Bright Data credentials can expire or need specific permissions. If things break, check your Bright Data dataset token and allowed headers first.
- If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
- Default prompts in AI nodes are generic. Add your brand voice early or you’ll be editing outputs forever.
Frequently Asked Questions
**How long does setup take?**
About 30 minutes if your accounts are already set up.

**Do I need to know how to code?**
No. You’ll connect credentials and paste in your Google Sheet details. The rest is configuration and light prompt editing.

**Is n8n free?**
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in Bright Data usage and OpenAI API costs (usually a few cents per run for GPT-4o-mini).

**Where should I host n8n?**
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.

**Can I customize the AI output?**
Yes, and you should. Update the prompt inside the LLM Summary Builder so it uses your offer language, compliance rules, and tone (for example, “premium,” “direct-response,” or “clinical”). You can also change the filter logic to include 3-star reviews if your category has softer complaints, then adjust the OpenAI Chat Engine prompt to output different formats like hooks-only, headline packs, or UGC-style scripts.

**What if the Bright Data request fails?**
Usually it’s an expired or incorrect authorization header. Regenerate your Bright Data token, update it in the HTTP Request nodes, and confirm the dataset API endpoint matches the dataset you’re calling. If it still fails, check snapshot status responses for errors, and make sure your Bright Data plan allows the dataset you’re trying to use.

**How many competitors can I research?**
A lot, as long as your Bright Data plan and your n8n execution limits can handle it.

**Why n8n instead of Zapier or Make?**
For this workflow, n8n is a better fit because it can handle polling loops, branching logic, and data shaping without turning into a fragile chain of zaps. Self-hosting is also a big deal if you plan to run this often and don’t want every run counted as a premium task. Zapier or Make can still work if you simplify the job (for example, no polling, fewer transformation steps), but you’ll usually hit limits faster. Frankly, the “scrape + wait + recheck” pattern is where lighter tools get annoying. Talk to an automation expert if you’re not sure which fits.
Once this is running, competitor review research stops being a “someday” task and becomes a button you press. The workflow handles the repetitive stuff. You handle the testing.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.