SE Ranking + OpenAI: intent-ready FAQ research
FAQ research gets messy fast. You pull questions from one place, answers from another, then spend an afternoon trying to remember where each idea came from and what it’s “for.”
SEO strategists feel it when a content plan needs proof, not guesses. Marketing leads feel it when “AI Search” starts showing up in stakeholder questions. And writers get stuck rewriting the same explanations because there is no clean, intent-labeled source of truth. This SE Ranking FAQs automation gives you a structured dataset you can actually use.
You’ll set up an n8n workflow that pulls real AI search prompts from SE Ranking, turns them into Q&A pairs, classifies intent with OpenAI, and exports a tidy JSON file (sources included).
How This Automation Works
Here’s the complete workflow you’ll be setting up:
n8n Workflow Template: SE Ranking + OpenAI: intent-ready FAQ research
flowchart LR
subgraph sg0["When clicking ‘Execute workflow’ Flow"]
direction LR
n0@{ icon: "mdi:play-circle", form: "rounded", label: "When clicking ‘Execute workf..", pos: "b", h: 48 }
n1@{ icon: "mdi:swap-vertical", form: "rounded", label: "Set the Input Fields", pos: "b", h: 48 }
n2["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>SE Ranking Prompts by Target"]
n3["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Extract All Links"]
n4["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Extract QnA"]
n5@{ icon: "mdi:robot", form: "rounded", label: "AI QnA Zeroshot Classifier", pos: "b", h: 48 }
n6["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Extract Questions Only"]
n7["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Extract Prompts"]
n8@{ icon: "mdi:cog", form: "rounded", label: "Final Data Aggregation", pos: "b", h: 48 }
n9@{ icon: "mdi:cog", form: "rounded", label: "Custom Data Aggregation", pos: "b", h: 48 }
n10["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/merge.svg' width='40' height='40' /></div><br/>Perform Custom Data Merge"]
n11@{ icon: "mdi:brain", form: "rounded", label: "OpenAI Chat Model for Zerosh..", pos: "b", h: 48 }
n12@{ icon: "mdi:cog", form: "rounded", label: "Write File to Disk", pos: "b", h: 48 }
n13@{ icon: "mdi:code-braces", form: "rounded", label: "Create a Binary Data", pos: "b", h: 48 }
n14["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/merge.svg' width='40' height='40' /></div><br/>Merge Enriched Data"]
n4 --> n10
n7 --> n10
n3 --> n10
n14 --> n8
n13 --> n12
n1 --> n2
n6 --> n5
n6 --> n10
n8 --> n13
n9 --> n14
n10 --> n9
n5 --> n14
n2 --> n3
n2 --> n4
n2 --> n6
n2 --> n7
n0 --> n1
n11 -.-> n5
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n0 trigger
class n5 ai
class n11 aiModel
class n2 api
class n3,n4,n6,n7,n13 code
classDef customIcon fill:none,stroke:none
class n2,n3,n4,n6,n7,n10,n14 customIcon
Why This Matters: FAQ Research That Doesn’t Fall Apart
“Let’s add an FAQ section” sounds simple until you try to do it with any rigor. Real questions live in AI search prompts, SERP features, support tickets, competitor pages, and random docs. You can absolutely copy-paste your way through it, but then you hit the second problem: you can’t explain why a question belongs on a page, what intent it serves, or where the insight came from. That’s when teams start shipping thin FAQs that don’t rank, don’t help users, and don’t stand up to scrutiny.
It adds up fast. Here’s where it breaks down.
- You end up with a Google Doc full of questions, but no consistent structure for answers, sources, or intent.
- Manual sorting into buckets like “how-to” vs “pricing” turns into opinion wars instead of repeatable logic.
- Sources get lost, so you can’t validate or revisit the underlying AI search prompt later.
- Refreshing FAQs becomes a quarterly slog, which means your content drifts while competitors keep updating.
What You’ll Build: An Intent-Classified FAQ Dataset From Real AI Search
This workflow turns AI search behavior into something you can plan with. You start by entering a target domain, region/source, and a few filters (like keyword includes/excludes and result limits). n8n then calls the SE Ranking API to fetch AI Search prompts tied to that domain. From those prompts, the workflow extracts candidate questions, builds Q&A pairs, and collects reference links so every item has a trail back to the source. Next, OpenAI runs a zero-shot classifier to label each Q&A with an intent category (HOW_TO, DEFINITION, PRICING, and more) and a confidence score. Finally, everything is merged and exported as a structured JSON file you can use for SEO pages, briefs, documentation, or a knowledge base.
The workflow starts with a single run (Manual Start) and parameter inputs. After SE Ranking data is pulled, code steps isolate questions, assemble answers, and attach reference links. OpenAI then classifies intent and the workflow aggregates everything into one clean JSON dataset ready to share or automate further.
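For planning purposes, here is a minimal sketch of what one record in the exported dataset can look like. The `id`, `question`, `category`, and `confidence` fields match the classifier schema used later in this guide; the `answer` and `sources` keys are illustrative names, so inspect your own export to confirm the exact structure.

```javascript
// Illustrative record from the exported JSON. "answer" and "sources" are
// assumed field names; id/question/category/confidence match the classifier schema.
const faqItem = {
  id: 17,
  question: "How do AI search prompts differ from classic keywords?",
  answer: "First-pass answer text assembled from the SE Ranking prompt data...",
  category: "DEFINITION", // intent label from the zero-shot classifier
  confidence: 0.91,       // classifier confidence score
  sources: ["https://example.com/ai-search-guide"], // extracted reference links
};
```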
What You’re Building
| What Gets Automated | What You’ll Achieve |
|---|---|
| Pulling AI Search prompts from SE Ranking for your target domain and filters | Real questions grounded in actual AI search behavior, not guesses |
| Extracting questions, Q&A pairs, and reference links | Every item keeps a trail back to its source |
| Zero-shot intent classification with OpenAI | Labels like HOW_TO, DEFINITION, and PRICING, each with a confidence score |
| Merging and exporting everything as structured JSON | A dataset you can plug into SEO pages, briefs, docs, or a knowledge base |
Expected Results
Say you’re building FAQs for 10 core pages and you want about 20 solid questions per page. Manually, you might spend about 10 minutes per question gathering context, writing a first-pass answer, and pasting sources, which is roughly 30+ hours of work. With this workflow, you can pull and classify a batch in one run (often 5–10 minutes), then spend your time reviewing and editing instead of collecting. Most teams get an initial “good enough” dataset in a single afternoon, not a full week.
Before You Start
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- SE Ranking for AI Search prompt data access.
- OpenAI to classify FAQ intent and confidence.
- SE Ranking API key (get it from your SE Ranking account/API settings).
Skill level: Intermediate. You’ll paste API keys, adjust a few fields, and run test executions.
Want someone to build this for you? Talk to an automation expert (free 15-minute consultation).
Step by Step
You define the target and filters. In the input fields node, you set the domain, region/source, and any keyword include/exclude rules so the pull matches the exact niche you care about.
SE Ranking data gets fetched and unpacked. n8n calls the SE Ranking AI Search endpoint, then separates raw prompts, reference links, and question candidates so they can be processed cleanly instead of as one giant blob.
OpenAI classifies intent (and confidence). The extracted Q&A items are sent through a zero-shot classifier backed by an OpenAI chat model, which tags each item as HOW_TO, DEFINITION, PRICING, and other intent types your team can act on.
A structured JSON file is produced. Everything gets merged and aggregated into a final dataset, converted into a binary payload, and written to disk as a JSON export you can plug into analysis, content planning, or publishing workflows.
You can easily modify the intent taxonomy to match your site (for example, splitting “PRICING” into “COST” vs “PLANS”) based on your needs. See the full implementation guide below for customization options.
Step-by-Step Implementation Guide
Step 1: Configure the Manual Trigger
Set up the manual trigger to start the workflow for testing and on-demand runs.
- Add the Manual Start Trigger node as the workflow entry point.
- Connect Manual Start Trigger to Assign Input Parameters.
Step 2: Configure Input Parameters
Define the target site and query parameters that will be sent to the SERanking API.
- Open Assign Input Parameters and add the following assignments:
- Set target_site to `seranking.com`, engine to `ai-mode`, source to `us`, and scope to `domain`.
- Set sort to `volume`, sort_order to `desc`, offset to `0`, and limit to `100`.
- Set multi_keyword_included to `[ [ { "type": "contains", "value": "seo" } ] ]`.
- Set multi_keyword_excluded to `[ [ { "type": "contains", "value": "seo" }, { "type": "contains", "value": "tools" } ], [ { "type": "contains", "value": "backlinks" } ] ]`.
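The nested-array filters are easy to misread. A reasonable reading, which is an assumption about SE Ranking's filter semantics you should verify against their API docs, is that each inner array groups conditions that must all match, while the outer array ORs the groups together:

```javascript
// Assumed filter semantics (confirm in the SE Ranking API docs):
// inner array = conditions ANDed together; outer array = groups ORed together.
const multi_keyword_excluded = [
  [
    { type: "contains", value: "seo" },   // exclude prompts containing "seo"...
    { type: "contains", value: "tools" }, // ...AND "tools"
  ],
  [
    { type: "contains", value: "backlinks" }, // OR prompts containing "backlinks"
  ],
];
```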
Step 3: Connect SERanking API and Fetch Prompts
Configure the API call to fetch prompts and route the output into parallel processing branches.
- Open Fetch SERanking Prompts and set URL to `https://api.seranking.com/v1/ai-search/prompts-by-target`.
- Enable Send Query and map query parameters to the assigned input values:
- Set target to `{{ $json.target_site }}`, scope to `{{ $json.scope }}`, source to `{{ $json.source }}`, and engine to `{{ $json.engine }}`.
- Set sort to `{{ $json.sort }}`, sort_order to `{{ $json.sort_order }}`, offset to `{{ $json.offset }}`, and limit to `{{ $json.limit }}`.
- Set filter[multi_keyword_included] to `{{ $json.multi_keyword_included }}` and filter[multi_keyword_excluded] to `{{ $json.multi_keyword_excluded }}`.
- Credential Required: Connect your httpHeaderAuth credentials in Fetch SERanking Prompts (the node is configured to use `genericCredentialType` with `httpHeaderAuth`).
- Confirm the parallel routing: Fetch SERanking Prompts outputs to Parse Reference Links, Build QnA Pairs, Isolate Questions, and Collect Raw Prompts in parallel.
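If you want to sanity-check the request outside n8n, here is a rough JavaScript equivalent of what the node sends. The `Authorization` header name and the filter serialization are assumptions; the `Token <key>` format (with a space) comes from the troubleshooting notes below.

```javascript
// Sketch of the request the HTTP Request node makes. Header name and
// filter serialization are assumptions; verify against SE Ranking's docs.
const params = new URLSearchParams({
  target: "seranking.com",
  scope: "domain",
  source: "us",
  engine: "ai-mode",
  sort: "volume",
  sort_order: "desc",
  offset: "0",
  limit: "100",
  // filter[multi_keyword_included] / [multi_keyword_excluded] omitted for brevity
});
const res = await fetch(
  `https://api.seranking.com/v1/ai-search/prompts-by-target?${params}`,
  { headers: { Authorization: "Token YOUR_API_KEY" } } // note the space after "Token"
);
const data = await res.json();
```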
If your SE Ranking setup issues bearer tokens instead, switch the credential type to `httpBearerAuth` and connect that credential type.
Step 4: Process and Merge Prompt Data Streams
Extract questions, reference links, QnA pairs, and raw prompts, then aggregate them into a combined dataset.
- Review code nodes to ensure each output structure is correct:
- Isolate Questions outputs `{ questions }` from prompts.
- Parse Reference Links outputs `{ urls }` filtered to valid `http` links.
- Build QnA Pairs outputs `{ qna }` with question/answer pairs.
- Collect Raw Prompts outputs `{ prompts }` for full raw data retention.
- Confirm that Isolate Questions outputs to both ZeroShot QnA Classifier and Combine Custom Streams in parallel.
- Set Combine Custom Streams to merge 4 inputs with Number Inputs set to `4`.
- In Aggregate Custom Data, set Aggregate to `aggregateAllItemData` and Destination Field Name to `custom_aggregate`.
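As a reference point while reviewing the code nodes, here is a minimal sketch of what an Isolate Questions Code node could look like. The template's actual implementation may differ; the input shape and the ends-with-question-mark heuristic are assumptions.

```javascript
// Minimal "Isolate Questions" sketch (n8n Code node). Assumes prompts
// arrive as an array of strings; the template's real logic may differ.
const prompts = $input.first().json.prompts ?? [];
const questions = prompts
  .filter((p) => typeof p === "string" && p.trim().endsWith("?"))
  .map((text, i) => ({ id: i + 1, question: text.trim() }));
return [{ json: { questions } }];
```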
Step 5: Configure AI Classification and Enrichment
Classify questions into intent categories using the zero-shot classifier and merge the results with custom data.
- Open ZeroShot QnA Classifier and keep the Text prompt as provided, ensuring the questions expression remains `{{ $json.questions.toJsonString() }}`.
- Verify the Schema Type is `manual` and the Input Schema defines the expected fields (`id`, `question`, `category`, `confidence`).
- Ensure OpenAI Chat Model is connected to ZeroShot QnA Classifier as the language model.
- Credential Required: Connect your openAiApi credentials in OpenAI Chat Model (credentials are added to the parent model node, not the classifier).
- Confirm that ZeroShot QnA Classifier outputs to Merge Enriched Output, and Aggregate Custom Data also outputs to Merge Enriched Output.
- In Aggregate Final Dataset, set Aggregate to `aggregateAllItemData` to combine enriched results.
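For reference, the manual schema implies the classifier returns items shaped like this (the values here are illustrative, not real output):

```javascript
// Expected classifier output per the manual Input Schema. Values are made up.
const classified = [
  { id: 1, question: "What is AI search?", category: "DEFINITION", confidence: 0.94 },
  { id: 2, question: "How much does SE Ranking cost?", category: "PRICING", confidence: 0.88 },
];
```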
Step 6: Create Output File
Convert the final dataset into a binary payload and save it as a JSON file.
- In Create Binary Payload, keep the Function Code that serializes JSON and base64-encodes it for binary output.
- Open Write JSON File and set Operation to `write`.
- Set File Name to `=C:\\SERanking_PromptFAQ.json`.
- Set Data Property Name to `=data`.
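If you need to adapt the binary step, this sketch shows the general pattern the Create Binary Payload node follows per the description above: serialize to JSON, base64-encode, and expose the result as the `data` binary property. The template's exact code may differ.

```javascript
// Sketch of a "Create Binary Payload" node: serializes the incoming JSON
// and exposes it as the binary property "data" for the write step.
const payload = JSON.stringify($input.first().json, null, 2);
return [{
  json: {},
  binary: {
    data: {
      data: Buffer.from(payload).toString("base64"),
      mimeType: "application/json",
      fileName: "SERanking_PromptFAQ.json",
    },
  },
}];
```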
Make sure the n8n process can write to `C:\` or update the path to a valid directory.
Step 7: Test and Activate Your Workflow
Run the workflow manually, verify the output, and activate it for production use.
- Click Execute Workflow from Manual Start Trigger to run a test.
- Check that Fetch SERanking Prompts returns data and that each parallel branch produces its expected output.
- Verify Merge Enriched Output and Aggregate Final Dataset contain both classification results and custom aggregates.
- Confirm Write JSON File creates `SERanking_PromptFAQ.json` with valid JSON content.
- When satisfied, switch the workflow to Active for production use.
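A quick way to validate the export outside n8n is a few lines of Node.js. This sketch assumes the dataset serializes as an array of classified items; adapt the field checks if your aggregation nests things differently.

```javascript
// Sanity-check the exported file (Node.js). Assumes an array of items;
// adjust the path if you changed the Write JSON File step.
const fs = require("fs");
const items = JSON.parse(fs.readFileSync("C:\\SERanking_PromptFAQ.json", "utf8"));
const bad = items.filter(
  (i) => !i.question || !i.category || typeof i.confidence !== "number"
);
console.log(`${items.length} items, ${bad.length} missing required fields`);
```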
Troubleshooting Tips
- SE Ranking credentials can expire or need specific permissions. If things break, check your SE Ranking API key status and header auth format in the HTTP Request node first.
- If you add Wait nodes or external processing, run times will vary. Bump up the wait duration if downstream nodes fail on empty responses.
- Default prompts in AI nodes are generic. Add your brand voice early or you’ll be editing outputs forever.
Quick Answers
**How long does setup take?**
About 30 minutes if your API keys are ready.
**Do I need coding skills?**
No. You’ll mostly configure credentials and edit a few input fields. The “code” nodes are already built into the workflow.
**Is this free to run?**
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in OpenAI API costs (often a few cents per batch, depending on how many questions you classify).
**Should I use n8n Cloud or self-host?**
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
**Can I customize this workflow?**
Yes, and you probably should. You can change the intent categories in the ZeroShot QnA Classifier node, tighten the niche by editing include/exclude filters in the Assign Input Parameters node, and swap the Write JSON File step for Google Sheets, a webhook, or a CMS output. A common tweak is filtering out low-confidence items before export so your writers only see “ready” questions; a sketch of that filter follows below.
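For that low-confidence filter, a Code node dropped in before the export could be as simple as this sketch (the threshold is your call; the field name assumes the classifier schema from Step 5):

```javascript
// Drop low-confidence items before export. Threshold is up to you.
const THRESHOLD = 0.7;
return $input.all().filter((item) => (item.json.confidence ?? 0) >= THRESHOLD);
```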
**Why is my SE Ranking API connection failing?**
Usually it’s the header auth format. SE Ranking expects a Token value (with a space) followed by your API key, and a missing space can break the request. Also check that the key is active and permitted for the AI Search endpoint. If it works in SE Ranking but not in n8n, inspect the HTTP Request node’s last response for a clear error message.
**How many questions can I process in one run?**
It depends mostly on your SE Ranking limits and how many items you request per run, but dozens to a few hundred questions in a batch is realistic.
**Is n8n really necessary for this, or would Zapier/Make work?**
Often, yes, because this isn’t just “move data from A to B.” You’re merging streams, transforming JSON, running a classifier, and exporting a structured dataset, which is the kind of multi-step logic n8n handles comfortably. Self-hosting also matters if you want to run big refreshes without watching task limits. Zapier or Make can still work if your version is very light (for example, pull data, send to Sheets), but you’ll usually hit complexity walls once you add enrichment and aggregation. Talk to an automation expert if you want help choosing.
Once this is running, FAQ research stops being a one-off project and becomes a repeatable dataset you can refresh whenever priorities change. Honestly, that’s the difference between “we should add FAQs” and a system your team can scale.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.