Bright Data to Google Sheets, briefs ready to write
Content briefs usually fail for one boring reason: the research is scattered. A few tabs here, a competitor outline in a doc there, and some “I’ll come back to this” notes that you honestly won’t.
This hits SEO specialists hardest, but content marketers and agency leads feel it too. With this SERP brief automation, you turn one keyword into a structured brief you can paste into Google Sheets without living in your browser all morning.
You’ll see how the workflow pulls the top Google results, extracts what matters from each page, and has an LLM synthesize it into a clear plan your writers can actually follow.
How This Automation Works
Here’s the complete workflow you’ll be setting up:
n8n Workflow Template: Bright Data to Google Sheets, briefs ready to write
flowchart LR
subgraph sg0["When chat message received Flow"]
direction LR
n0@{ icon: "mdi:play-circle", form: "rounded", label: "When chat message received", pos: "b", h: 48 }
n1["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>extract url"]
n2@{ icon: "mdi:cog", form: "rounded", label: "Google SERP", pos: "b", h: 48 }
n3@{ icon: "mdi:swap-vertical", form: "rounded", label: "Loop Over Items", pos: "b", h: 48 }
n4@{ icon: "mdi:cog", form: "rounded", label: "Limit", pos: "b", h: 48 }
n5@{ icon: "mdi:cog", form: "rounded", label: "Aggregate", pos: "b", h: 48 }
n6@{ icon: "mdi:brain", form: "rounded", label: "OpenRouter Chat Model1", pos: "b", h: 48 }
n7@{ icon: "mdi:robot", form: "rounded", label: "Respond to Chat", pos: "b", h: 48 }
n8@{ icon: "mdi:robot", form: "rounded", label: "Respond to Chat1", pos: "b", h: 48 }
n9@{ icon: "mdi:robot", form: "rounded", label: "Respond to Chat2", pos: "b", h: 48 }
n10@{ icon: "mdi:robot", form: "rounded", label: "Respond to Chat3", pos: "b", h: 48 }
n11@{ icon: "mdi:robot", form: "rounded", label: "Respond to Chat4", pos: "b", h: 48 }
n12@{ icon: "mdi:brain", form: "rounded", label: "OpenRouter Chat Model2", pos: "b", h: 48 }
n13@{ icon: "mdi:robot", form: "rounded", label: "Respond to Chat5", pos: "b", h: 48 }
n14@{ icon: "mdi:cog", form: "rounded", label: "Access and extract data from..", pos: "b", h: 48 }
n15@{ icon: "mdi:brain", form: "rounded", label: "OpenRouter Chat Model", pos: "b", h: 48 }
n16@{ icon: "mdi:robot", form: "rounded", label: "Structured Output Parser", pos: "b", h: 48 }
n17@{ icon: "mdi:robot", form: "rounded", label: "analyse site", pos: "b", h: 48 }
n18["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>extract html1"]
n19["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>clean html"]
n20@{ icon: "mdi:swap-vertical", form: "rounded", label: "url", pos: "b", h: 48 }
n21["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/merge.svg' width='40' height='40' /></div><br/>Merge1"]
n22@{ icon: "mdi:robot", form: "rounded", label: "Structured Output Parser1", pos: "b", h: 48 }
n23@{ icon: "mdi:robot", form: "rounded", label: "Analysis", pos: "b", h: 48 }
n24@{ icon: "mdi:robot", form: "rounded", label: "Format Output", pos: "b", h: 48 }
n25@{ icon: "mdi:memory", form: "rounded", label: "Simple Memory", pos: "b", h: 48 }
n20 --> n21
n4 --> n3
n21 --> n9
n23 --> n24
n23 --> n13
n5 --> n10
n19 --> n17
n19 --> n18
n2 --> n1
n1 --> n7
n17 --> n21
n24 --> n11
n25 -.-> n0
n18 --> n21
n3 --> n5
n3 --> n14
n3 --> n20
n7 --> n4
n8 --> n2
n9 --> n3
n10 --> n23
n15 -.-> n17
n6 -.-> n24
n12 -.-> n23
n16 -.-> n17
n22 -.-> n23
n0 --> n8
n14 --> n19
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n0 trigger
class n7,n8,n9,n10,n11,n13,n16,n17,n22,n23,n24 ai
class n6,n12,n15 aiModel
class n25 ai
class n1,n18,n19 code
class n4 disabled
classDef customIcon fill:none,stroke:none
class n1,n18,n19,n21 customIcon
Why This Matters: SERP Research Is a Time Trap
Writing a “quick” SEO brief rarely stays quick. You Google the keyword, open the top results, skim headings, copy a few sections into a doc, then realize half the pages are paywalled, blocked, or stuffed with junk you don’t want to cite. After that, you still have to answer the hard parts: what the real intent is, which topics show up on every ranking page, and where you can create something better. By the time you’re done, the brief is inconsistent and your brain is tired from switching contexts.
It adds up fast. Here’s where it breaks down in real life.
- Opening and skimming 10 competitor pages can burn about 2 hours per keyword, especially when you’re trying to be thorough.
- Copy-pasting headings and notes manually invites errors, so the brief ends up missing key sections that “every ranking page” includes.
- Some sites block scrapers and even aggressive browsing, which means you get incomplete data right when you need reliability.
- When briefs aren’t standardized, writers ask more questions and the first draft takes longer to get review-ready.
What You’ll Build: A Keyword-to-Brief Research Pipeline
This workflow acts like an AI content strategist that starts with a simple chat prompt. You enter a target keyword, and it immediately pulls the top 10 organic Google results using Bright Data’s SERP API. Then it loops through each result URL and reliably extracts the full page content using Bright Data’s Web Unblocker, which helps bypass common anti-bot roadblocks. As each page is collected, the workflow cleans the HTML, extracts structure (titles, headings, word count, and a summary), and aggregates everything into one dataset. Finally, an LLM accessed through OpenRouter synthesizes the whole competitive landscape into a strategic content brief that calls out intent, must-cover topics, and content gaps you can use to outrank what’s already ranking.
The workflow begins in a chat interface and uses the keyword as the only required input. From there, it scrapes the SERP, loops through competitor pages to extract structured insights, and then generates a formatted brief. You can paste that brief straight into Google Sheets (or Excel) so the whole team can work from one source.
What You’re Building
| What Gets Automated | What You’ll Achieve |
|---|---|
| Pulling the top 10 Google results and full competitor page content via Bright Data | No more opening and skimming 10 tabs per keyword |
| Cleaning, structuring, and synthesizing the data into a brief with an LLM | A standardized, paste-ready brief for Google Sheets in minutes |
Expected Results
Say you create 5 SEO briefs a week. Manually, reviewing the top 10 results usually means opening 10 pages, skimming, copying headings, and summarizing, which is often about 2 hours per brief (so roughly 10 hours weekly). With this workflow, you submit the keyword in chat (a minute), let scraping and processing run for a bit, then paste the formatted output into Google Sheets in about 10 minutes. You still review and adjust, but you get most of your week back.
Before You Start
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- Bright Data for SERP API and Web Unblocker scraping
- OpenRouter to access an LLM for synthesis
- Bright Data API key (get it from your Bright Data dashboard)
Skill level: Intermediate. You will connect credentials, test a run, and tweak a prompt, but you won’t be writing an app.
Want someone to build this for you? Talk to an automation expert (free 15-minute consultation).
Step by Step
A chat message starts the run. You enter a target keyword in the workflow’s chat interface, which kicks off the SERP analysis and keeps the whole process easy for non-technical teammates.
Bright Data pulls the SERP and competitor URLs. The workflow fetches the top 10 organic results, then a code step derives and prepares the links so they can be processed one by one.
Each page is scraped and converted into structured “signals.” A loop processes every URL, uses Web Unblocker to extract HTML reliably, then cleans it and pulls out useful fields like headings, word count, and summaries.
OpenRouter turns the dataset into a brief you can paste into Sheets. The workflow aggregates all 10 pages into one bundle, asks the LLM for intent, topic coverage, and gaps, then formats the response into a clean report for Google Sheets (or Excel) and sends it back in chat.
You can easily modify the output format to match your brief template in Sheets, or adjust the number of SERP results if you want more or less competitor coverage. See the full implementation guide below for customization options.
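To make the synthesis step concrete, here is a rough sketch of how the aggregated data gets assembled into the analysis prompt. `buildAnalysisPrompt` is our name, not a node in the template; it mirrors the `$items(...).map(...).join(...)` expressions you will configure in the implementation guide.

```javascript
// Hypothetical helper mirroring the workflow's prompt-assembly expressions.
// It concatenates the keyword, the raw SERP items, and the per-page extracts,
// separated the same way the n8n expression does ('\n\n---\n\n').
function buildAnalysisPrompt(keyword, serpItems, pageItems) {
  const dump = (items) =>
    items.map((item) => JSON.stringify(item.json, null, 2)).join('\n\n---\n\n');
  return [
    `target keyword : ${keyword}`,
    `serp synthesis: ${dump(serpItems)}`,
    `top 10 pages extract: ${dump(pageItems)}`,
  ].join('\n');
}
```

In n8n itself this logic lives in a single expression field, so you would not normally add a Code node for it; the sketch just shows what the LLM actually receives.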
Step-by-Step Implementation Guide
Step 1: Configure the Chat Trigger
Set up the entry point for user input and session memory so the workflow can receive keywords and maintain context.
- Add and open Chat Intake Trigger.
- Set Public to `true`.
- Set Initial Messages to `Welcome. I am your AI Content Strategist. Provide a target keyword, and I will analyze the top 10 search results to generate a detailed content plan designed to outrank the competition.`
- In Options, confirm: Title is `SEO Content Strategist`, Subtitle is `Generate a strategic content brief based on SERP analysis.`, Response Mode is `responseNodes`, Input Placeholder is `Enter your target keyword...`, and Load Previous Session is `memory`.
- Connect Session Memory Buffer to Chat Intake Trigger using the ai_memory connection.
Step 2: Connect Bright Data
These nodes fetch SERP data and page HTML from Google results using Bright Data.
- Open Fetch SERP Results and set URL to `=https://www.google.com/search?q={{ encodeURIComponent($json.chatInput) }}&num=10&brd_json=1`.
- Set Zone to `serp_api1` and Country to `us` in Fetch SERP Results.
- Credential Required: Connect your brightdataApi credentials in Fetch SERP Results.
- Open Fetch Page Content and set URL to `={{ $json.link }}`.
- Set Zone to `web_unlocker1` and Country to `us` in Fetch Page Content.
- Credential Required: Connect your brightdataApi credentials in Fetch Page Content.
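If you want to sanity-check the SERP URL expression outside n8n, it can be reproduced in plain JavaScript. `buildSerpUrl` is a hypothetical helper; the query parameters (`num=10`, `brd_json=1`) come straight from the node configuration above.

```javascript
// Rebuild the Fetch SERP Results URL outside n8n for quick verification.
// brd_json=1 asks Bright Data's SERP zone to return parsed JSON instead of raw HTML.
function buildSerpUrl(chatInput, numResults = 10) {
  const q = encodeURIComponent(chatInput);
  return `https://www.google.com/search?q=${q}&num=${numResults}&brd_json=1`;
}

console.log(buildSerpUrl('content brief automation'));
// → https://www.google.com/search?q=content%20brief%20automation&num=10&brd_json=1
```

Bumping `numResults` here corresponds to changing `num=10` in the node's URL field if you want more or fewer competitors.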
Step 3: Set Up SERP Processing and Page Scraping
Extract result links, batch through pages, scrape HTML, and consolidate insights.
- Configure Notify SERP Analysis with Message set to `=Processing: Analyzing top 10 Google results for "{{ $json.chatInput }}".`
- In Derive Result Links, keep the provided JavaScript Code that splits the `organic` array into individual items.
- In Notify Extraction Start, set Message to `Starting content extraction from top-ranking pages.`
- If you want to limit scraped pages, enable Apply Item Cap (currently disabled) and set a limit.
- Keep Iterate Records Batch connected so it routes items to both Fetch Page Content and Set Page URL, while the first output continues to Combine Scrape Data.
- In Set Page URL, set the url assignment to `={{ $json.link }}`.
- In Notify Page Scrape, set Message to `=Scraped {{ $json.url }}`.
- In Combine Scrape Data, set Aggregate to `aggregateAllItemData`.
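The Derive Result Links node ships with its own script, but its logic can be sketched as a small testable function. The field names `organic`, `link`, and `title` reflect the `brd_json=1` response shape as the workflow uses it; treat them as assumptions if you swap in a different SERP provider.

```javascript
// Sketch of the "Derive Result Links" logic as a standalone function.
// In the actual n8n Code node you would call it as:
//   return deriveResultLinks($input.first().json);
function deriveResultLinks(serpJson) {
  const organic = Array.isArray(serpJson.organic) ? serpJson.organic : [];
  // One n8n item ({ json: ... }) per organic result.
  return organic.map((r) => ({ json: { link: r.link, title: r.title } }));
}
```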
Step 4: Configure AI Analysis and Output Messages
Clean HTML, analyze pages, synthesize strategy, and deliver chat outputs.
- In Sanitize Page HTML, keep the provided JavaScript Code that strips scripts, styles, and noise.
- Sanitize Page HTML outputs to both Analyze Page Content and Extract Page Fields in parallel.
- In Analyze Page Content, set Text to `={{ $json.cleanedHtml }}` and keep Has Output Parser enabled.
- Ensure Router Model Site is connected as the language model for Analyze Page Content. Credential Required: Connect your openRouterApi credentials in Router Model Site.
- Ensure Parse Site Summary is connected as the output parser for Analyze Page Content. Add credentials to Router Model Site (not the parser).
- In Extract Page Fields, keep the provided JavaScript Code that extracts titles, meta descriptions, headings, and word counts.
- In Merge Page Insights, set Number of Inputs to `3` to combine Set Page URL, Extract Page Fields, and Analyze Page Content.
- In Notify Synthesis Begin, set Message to `All data collected. Synthesizing insights and generating your strategic content plan.`
- In Strategic Analysis, set Text to `=target keyword : {{ $('Chat Intake Trigger').item.json.chatInput }} serp synthesis: {{ $items("Fetch SERP Results").map(item => JSON.stringify(item.json, null, 2)).join('\n\n---\n\n') }} top 10 pages extract: {{ $items("Combine Scrape Data").map(item => JSON.stringify(item.json, null, 2)).join('\n\n---\n\n') }}` and keep Has Output Parser enabled.
- Ensure Router Model Analyst is connected as the language model for Strategic Analysis. Credential Required: Connect your openRouterApi credentials in Router Model Analyst.
- Ensure Parse Strategy Output is connected as the output parser for Strategic Analysis. Add credentials to Router Model Analyst (not the parser).
- Strategic Analysis outputs to both Format Strategy Brief and Send Raw Analysis in parallel.
- In Format Strategy Brief, set Text to `=targeted Keyword: {{ $('Chat Intake Trigger').item.json.chatInput }} Analysis: {{ $items("Strategic Analysis").map(item => JSON.stringify(item.json, null, 2)).join('\n\n---\n\n') }} {{$json}}`.
- Ensure Router Model Summarizer is connected as the language model for Format Strategy Brief. Credential Required: Connect your openRouterApi credentials in Router Model Summarizer.
- Configure the chat output nodes with their message expressions: Send Raw Analysis uses `={{ $json.text }}` and Send Final Brief uses `={{ $json.text }}`. Group the other chat status nodes (Notify SERP Analysis, Notify Extraction Start, Notify Page Scrape, Notify Synthesis Begin) to keep user feedback consistent.
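The Sanitize Page HTML and Extract Page Fields nodes ship with their own scripts; the sketch below shows the general shape of that logic (regex-based tag stripping, heading capture, word count). The exact expressions in the template may differ, so treat this as an approximation rather than a drop-in replacement.

```javascript
// Approximate sketch of the HTML cleaning and field extraction steps.
function cleanHtml(html) {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, '') // drop scripts
    .replace(/<style[\s\S]*?<\/style>/gi, '')   // drop styles
    .replace(/<[^>]+>/g, ' ')                   // drop remaining tags
    .replace(/\s+/g, ' ')
    .trim();
}

function extractFields(html) {
  const title = (html.match(/<title[^>]*>([\s\S]*?)<\/title>/i) || [])[1] || '';
  const headings = [...html.matchAll(/<h[1-3][^>]*>([\s\S]*?)<\/h[1-3]>/gi)]
    .map((m) => m[1].trim());
  const wordCount = cleanHtml(html).split(/\s+/).filter(Boolean).length;
  return { title: title.trim(), headings, wordCount };
}
```

Regex cleaning is deliberately crude but dependency-free, which suits an n8n Code node; if you need robust parsing, an HTML extraction node or library is the sturdier choice.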
Step 5: Test and Activate Your Workflow
Validate the end-to-end flow before going live.
- Click Execute Workflow and submit a sample keyword through Chat Intake Trigger.
- Confirm chat status messages appear in order and that Fetch SERP Results returns 10 results.
- Verify Sanitize Page HTML produced `cleanedHtml` and that both Analyze Page Content and Extract Page Fields ran in parallel.
- Check that Strategic Analysis generates structured JSON and that Format Strategy Brief sends a formatted response via Send Final Brief.
- When satisfied, toggle the workflow to Active for production use.
Troubleshooting Tips
- Bright Data credentials can expire or be tied to the wrong Zone. If things break, check your Bright Data dashboard (SERP API and Web Unblocker activation) and then reselect the credential inside the Bright Data nodes.
- If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
- Default prompts in AI nodes are generic. Add your brand voice early or you’ll be editing outputs forever.
Quick Answers
How long does setup take?
About 30 minutes if your Bright Data and OpenRouter accounts are ready.
Do I need to know how to code?
No. You’ll connect credentials and adjust a couple of prompts and output fields.
Is this free to run?
Partly. n8n has a free self-hosted option and a free trial on n8n Cloud; Cloud plans start at $20/month for higher volume. You’ll also need to factor in Bright Data usage plus OpenRouter model/API costs per run.
Should I use n8n Cloud or self-host?
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
Can I customize the workflow?
Yes, and you should. Most teams tweak the “Apply Item Cap” node to scrape fewer (or more) results, then adjust the OpenRouter analysis prompt to match their brief template (tone, required sections, and brand rules). You can also change the output formatting step so it maps cleanly into Google Sheets columns instead of a single Markdown block.
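If you go the Sheets-columns route, a small formatting step can turn the brief’s structured JSON into tab-separated rows that paste cleanly. `briefToTsv` and the field names (`intent`, `mustCoverTopics`, `contentGaps`) are illustrative assumptions, not guaranteed output of the template — match them to your Parse Strategy Output schema.

```javascript
// Hypothetical formatter: structured brief JSON -> TSV rows for Google Sheets.
// Field names here are assumptions; align them with your output parser's schema.
function briefToTsv(brief) {
  const rows = [
    ['Intent', brief.intent || ''],
    ['Must-cover topics', (brief.mustCoverTopics || []).join('; ')],
    ['Content gaps', (brief.contentGaps || []).join('; ')],
  ];
  // Tabs between columns, newlines between rows: Sheets splits these on paste.
  return rows.map((r) => r.join('\t')).join('\n');
}
```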
Why does Bright Data keep failing?
Usually it’s an API key issue or a Zone that wasn’t enabled for SERP API or Web Unblocker. Recheck product activation in your Bright Data dashboard, then update the credential in n8n. If it fails only on certain sites, you may be hitting a block that needs different unblocker settings or a slower batch size.
How many briefs can this handle per day?
For most teams, dozens of briefs per day is realistic if your Bright Data and OpenRouter limits allow it, and you keep the run to the top 10 results.
Is n8n better than Zapier or Make for this?
Often, yes. This workflow depends on looping, aggregation, and a multi-step scrape-and-summarize chain, which is where n8n tends to feel more flexible (and less expensive at higher volume). Zapier or Make can work if you simplify the problem, but they get awkward when you need to process 10 URLs, clean HTML, and then pass a combined dataset to an LLM. n8n also gives you the option to self-host, which matters when you want unlimited executions and more control over data handling. If you’re unsure, Talk to an automation expert and we’ll help you pick the right approach.
Once this is running, you stop “researching” the same SERP over and over. You get a repeatable brief factory, and your team gets to spend their time writing the part that actually matters.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.