Decodo + Google Sheets: keyword themes, cleaned up
Keyword research gets messy fast. You grab a few SERP snippets, paste them into a doc, open three “related searches” tabs, then realize you’ve collected the same ideas twice and missed the good stuff.
This is what SEO leads complain about in Slack at 4pm. Content strategists feel it when briefs go out thin. And a marketing manager running a small team just wants keyword theme automation that produces clean clusters without babysitting spreadsheets.
This workflow uses Decodo to pull live Google results, then GPT-4.1-mini extracts topics and writes structured themes into Google Sheets (or a data table you can export). You’ll see how it works, what it replaces, and how to avoid the usual setup traps.
How This Automation Works
See how this solves the problem:
n8n Workflow Template: Decodo + Google Sheets: keyword themes, cleaned up
```mermaid
flowchart LR
subgraph sg0["When clicking ‘Execute workflow’ Flow"]
direction LR
n0@{ icon: "mdi:play-circle", form: "rounded", label: "When clicking ‘Execute workf..", pos: "b", h: 48 }
n1@{ icon: "mdi:swap-vertical", form: "rounded", label: "Set the Input Fields", pos: "b", h: 48 }
n2@{ icon: "mdi:code-braces", form: "rounded", label: "Create a Binary Response", pos: "b", h: 48 }
n3@{ icon: "mdi:cog", form: "rounded", label: "Write the Structured Google ..", pos: "b", h: 48 }
n4@{ icon: "mdi:cog", form: "rounded", label: "Google Search with Decodo", pos: "b", h: 48 }
n5@{ icon: "mdi:swap-vertical", form: "rounded", label: "Loop Over Items", pos: "b", h: 48 }
n6@{ icon: "mdi:robot", form: "rounded", label: "Extract Keywords and Topics", pos: "b", h: 48 }
n7@{ icon: "mdi:brain", form: "rounded", label: "OpenAI Chat Model for Keywor..", pos: "b", h: 48 }
n8@{ icon: "mdi:code-braces", form: "rounded", label: "Return the Organic Search Re..", pos: "b", h: 48 }
n9@{ icon: "mdi:cog", form: "rounded", label: "Insert row on Data Table", pos: "b", h: 48 }
n10@{ icon: "mdi:cog", form: "rounded", label: "If row exists", pos: "b", h: 48 }
n11@{ icon: "mdi:swap-horizontal", form: "rounded", label: "If", pos: "b", h: 48 }
n11 --> n5
n11 --> n9
n10 --> n11
n5 --> n6
n1 --> n4
n2 --> n3
n9 --> n5
n4 --> n8
n4 --> n2
n6 --> n10
n8 --> n5
n0 --> n1
n7 -.-> n6
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n0 trigger
class n6 ai
class n7 aiModel
class n11 decision
class n2,n8 code
```
The Challenge: Clean Keyword Themes Without Spreadsheet Chaos
Manual SERP-based research looks simple until you do it at volume. You search a term, skim ten results, copy snippets, and try to “mentally cluster” what you’re seeing into themes for a brief. Then the next keyword comes in, and you repeat everything. It’s slow, but worse, it’s inconsistent. Two people can pull the same SERP and walk away with totally different themes, which means briefs drift and content plans become a guessing game. Miss one intent angle and you’ll feel it later in rewrites, stakeholder questions, and pages that never quite rank.
It adds up fast. Here’s where it breaks down in real life.
- You end up re-reading the same competitor pages because your notes aren’t deduplicated by URL.
- Copy-pasting snippets into a sheet invites small errors that turn into bad themes and weak briefs.
- Traditional keyword tools often miss context, so you don’t see entities and related topics baked into the SERP language.
- When you need to refresh research weekly, the task quietly steals a half day from someone’s calendar.
The Fix: Live SERP Scraping + AI Topic Extraction Into Sheets
This automation turns a single search query into a structured set of keyword themes you can actually use. You enter the keyword and target geography (for example, “Pizza” in “India”), and Decodo scrapes the live Google organic results. The workflow then loops through each result snippet and uses GPT-4.1-mini to extract meaningful keywords, entities, and topics from the text. After that, it checks if the URL already exists in your dataset so you don’t store duplicates. New items get inserted, repeats get skipped, and you end up with clean rows ready for briefing, clustering, trend tracking, or export into Google Sheets.
The workflow starts with input fields for query and geo, then pulls live SERP results via Decodo. AI extracts themes from each organic snippet and stores them only if they’re new. Finally, the raw Decodo response is saved as a JSON file so you always have an audit trail when someone asks, “Where did this come from?”
What Changes: Before vs. After
| What This Eliminates | Impact You’ll See |
|---|---|
| Copy-pasting SERP snippets into a sheet by hand | Structured keyword, entity, and topic rows generated automatically |
| Re-reading the same competitor pages | URL-level dedupe skips anything already in your table |
| Themes that differ from person to person | One repeatable extraction schema applied to every query |
| Weekly research refreshes that eat half a day | Submit queries and review clean rows while the workflow runs |
Real-World Impact
Say you research 10 keywords for a new content cluster. Manually, you’ll usually spend about 10 minutes per keyword opening results, copying snippets, and trying to group themes, which is roughly 2 hours (and that’s on a focused day). With this workflow, you submit the 10 queries, let Decodo fetch the SERPs, and GPT-4.1-mini extracts topics while you do other work. You’ll still review the sheet, but the “blank page” part is gone, and dedupe happens automatically.
Requirements
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- Decodo (community node) to scrape live Google results.
- OpenAI to extract topics with GPT-4.1-mini.
- Google Sheets to store themes for briefs and planning.
- Decodo API Key (get it from your Decodo dashboard).
- OpenAI API Key (get it from the OpenAI API settings page).
Skill level: Intermediate. You’ll connect credentials, install a community node (self-hosted), and edit a couple of input fields safely.
Need help implementing this? Talk to an automation expert (free 15-minute consultation).
The Workflow Flow
You define the query and geo. A Set node holds your search_query and target region so you can reuse the workflow without rebuilding anything.
Decodo pulls the live SERP. The Decodo Search Request fetches Google organic results for that exact query, which means you’re working off real-world language, not a stale keyword database.
AI extracts topics from each snippet. The workflow loops through the organic results, and the GPT-4.1-mini analyzer turns each snippet into structured keywords, entities, and themes you can sort and cluster later.
Duplicates get filtered, then results are stored. A data-table lookup checks if the URL is already present, inserts only new records, and saves the raw Decodo response to a JSON file for traceability.
You can easily modify the output destination from an n8n Data Table to Google Sheets (or even Excel 365) based on your needs. See the full implementation guide below for customization options.
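Stripped of n8n specifics, the loop + dedupe step is a small piece of logic. Here's an illustrative sketch, not the workflow's actual node code: `dedupeAndStore` and `extractThemes` are hypothetical stand-ins for the Loop Over Items / If row exists nodes and the GPT-4.1-mini call, and the `Set` replaces the real Data Table lookup.

```javascript
// Illustrative sketch of the loop + dedupe pattern the workflow implements.
// `extractThemes` stands in for the GPT-4.1-mini extraction; the real
// workflow checks an n8n Data Table rather than an in-memory Set.
function dedupeAndStore(organicResults, extractThemes) {
  const seenUrls = new Set();
  const rows = [];
  for (const result of organicResults) {
    if (seenUrls.has(result.url)) continue; // "If row exists" → skip
    seenUrls.add(result.url);
    rows.push({ url: result.url, ...extractThemes(result.desc) });
  }
  return rows;
}

// Example: two results share a URL, so only two rows come out.
const rows = dedupeAndStore(
  [
    { url: "https://a.com", desc: "wood-fired pizza ovens" },
    { url: "https://a.com", desc: "wood-fired pizza ovens" },
    { url: "https://b.com", desc: "best pizza in Mumbai" },
  ],
  (desc) => ({ keywords: desc.split(" ") })
);
console.log(rows.length); // 2
```

The in-memory Set only dedupes within one run; the Data Table version also catches URLs you stored last week, which is what makes repeat research cheap.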
Step-by-Step Implementation Guide
Step 1: Configure the Manual Trigger
Start the workflow with a manual run so you can validate inputs and downstream processing.
- Add Manual Launch Trigger as your starting node (this workflow already uses it).
- Connect Manual Launch Trigger to Define Input Variables.
Step 2: Connect Decodo Search and Define Inputs
Define your search inputs and configure the Decodo API request that powers the search data.
- In Define Input Variables, set search_query to `Pizza` and geo to `India`.
- Open Decodo Search Request and set Operation to `google_search`.
- Set Query to `={{ $json.search_query }}` and Geo to `={{ $json.geo }}`.
- Set Locale to `en`, Headless to `false`, and Markdown to `true`.
- Credential Required: Connect your decodoApi credentials in Decodo Search Request.
Step 3: Extract Organic Results and Prepare File Output
Process search results and save the raw response to a JSON file for auditability.
- In Extract Organic Results, keep the JavaScript code set to `return $input.first().json.results[0].content.results.results.organic` to isolate organic results.
- In Build Binary Payload, keep the function code that converts the JSON to base64 binary.
- In Save Search JSON File, set Operation to `write` and File Name to `=C:\\{{ $('Define Input Variables').item.json.search_query }}.json`.
- Note the parallel execution: Decodo Search Request outputs to both Extract Organic Results and Build Binary Payload at the same time.
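The Build Binary Payload step is essentially a JSON-to-base64 conversion. Here's a minimal Node.js sketch of the same idea; the real n8n Code node returns data in n8n's item format, and the field names below are illustrative rather than copied from the workflow:

```javascript
// Convert a JSON response to a base64-encoded binary payload,
// the same idea Build Binary Payload applies before the file write.
function toBinaryPayload(json, fileName) {
  const data = Buffer.from(JSON.stringify(json, null, 2)).toString("base64");
  return {
    binary: {
      data: { data, mimeType: "application/json", fileName },
    },
  };
}

const payload = toBinaryPayload({ query: "Pizza", geo: "India" }, "Pizza.json");

// Decoding the base64 recovers the original JSON, which is exactly
// why the saved file works as an audit trail.
const roundTrip = JSON.parse(
  Buffer.from(payload.binary.data.data, "base64").toString("utf8")
);
console.log(roundTrip.query); // Pizza
```

If you later move the file write off Windows, only the `File Name` expression needs to change; the payload conversion is platform-neutral.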
Step 4: Set Up AI Enrichment and Data Table Storage
Iterate through each organic result, extract keywords and topics, and store only new URLs.
- In Iterate Result Items, leave the options at their defaults and make sure it receives data from Extract Organic Results.
- In Derive Keywords & Themes, set Text to `=Extract keywords and topics of the following content \n\nDescription - {{ $json.desc }}` and keep the manual schema as provided.
- Credential Required: Connect your openAiApi credentials in OpenAI Topic Analyzer. This is the language model for Derive Keywords & Themes; do not add credentials to the extractor node.
- In Check Table Row Exists, set the url filter to `={{ $('Iterate Result Items').item.json.url }}` and select your data table `DecodoGoogleSearchResults`.
- In Conditional Branch Check, keep the condition `={{!$json.output.isEmpty() }}` to route only new rows to insertion.
- In Insert Table Record, map url to `={{ $('Iterate Result Items').item.json.url }}`, topics to `={{ $('Derive Keywords & Themes').item.json.output.topics.toJsonString() }}`, and keywords to `={{ $('Derive Keywords & Themes').item.json.output.keywords.toJsonString() }}`.
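If you want to sanity-check what actually lands in the table, the mapping above boils down to stringifying two arrays. This sketch uses a made-up extractor output (the `output` shape is assumed from the workflow's schema), and `JSON.stringify` stands in for n8n's `.toJsonString()`:

```javascript
// Hypothetical GPT extractor output for one snippet. The shape (an
// `output` object holding topics and keywords arrays) mirrors the
// workflow's manual schema; the values are invented for illustration.
const item = {
  url: "https://example.com/pizza-guide",
  output: {
    topics: ["pizza styles", "regional cuisine"],
    keywords: ["neapolitan pizza", "pizza delivery india"],
  },
};

// Equivalent of the Insert Table Record mapping: arrays are stored
// as JSON strings so each table row stays flat.
const row = {
  url: item.url,
  topics: JSON.stringify(item.output.topics),
  keywords: JSON.stringify(item.output.keywords),
};
console.log(row.topics); // ["pizza styles","regional cuisine"]
```

Storing arrays as JSON strings keeps the table schema simple; parse them back with `JSON.parse` when you export to Google Sheets or feed a clustering step.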
Step 5: Test and Activate Your Workflow
Validate the full pipeline from search to enrichment and storage before turning it on.
- Click Execute Workflow on Manual Launch Trigger to run a test.
- Confirm that Decodo Search Request returns results and that Save Search JSON File writes a file to `C:\{search_query}.json`.
- Verify that Iterate Result Items loops through each organic result and that Insert Table Record inserts only new URLs.
- When everything looks correct, toggle the workflow to Active for production use.
Watch Out For
- Decodo credentials can expire or need specific permissions. If things break, check the Decodo credentials entry in n8n first, then confirm your API key is still active in the Decodo dashboard.
- If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
- Default prompts in AI nodes are generic. Add your brand voice early or you’ll be editing outputs forever.
Common Questions
How long does this take to set up?
About an hour if you already have your Decodo and OpenAI keys.
Can a non-technical marketing team run this?
Yes, but you’ll want one person who’s comfortable editing fields and testing runs. No coding is required, though installing the Decodo community node is the one “techy” part.
Is this free to run?
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in OpenAI API usage (often a few dollars a month at typical marketing volumes) plus your Decodo plan.
Where should I host n8n?
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
Can I customize the workflow?
You can swap the AI model in the OpenAI Chat Model node (gpt-4.1-mini to something else) and adjust the extraction schema in the “Derive Keywords & Themes” step. Common tweaks include adding fields like “intent,” “content angle,” or “page type,” changing geo/locale parameters in the Decodo search, and writing results directly to Google Sheets instead of the n8n Data Table.
Why is the Decodo node failing?
Usually it’s an invalid or expired Decodo API key saved in n8n credentials. It can also be the community node not being installed correctly on your self-hosted instance, or your Decodo plan blocking the specific endpoint you’re calling. If it fails only on some runs, check for rate limits or temporary Google response issues and rerun the same query.
Is there a limit on how many keywords I can process?
If you self-host, there’s no execution limit; it mostly depends on your server and API rate limits.
Is n8n a better fit than Zapier or Make for this?
Often, yes. This workflow relies on a Decodo community node and a loop + dedupe pattern (split-in-batches, table lookup, conditional insert), which is doable in Zapier/Make but gets clunky and pricey as volume grows. n8n also makes it easier to keep an audit trail by saving the raw SERP JSON alongside the structured rows. If you only need “keyword in, row out” with minimal logic, Make can be quick. But for ongoing research where you care about dedupe and structure, n8n is the better fit most of the time. Talk to an automation expert if you want help choosing.
Once this is running, keyword themes stop being a weekly scramble. You get clean, deduped outputs you can trust, and you can spend your time shaping strategy instead of cleaning tabs.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.