January 22, 2026

SE Ranking + OpenAI: intent-ready FAQ research

Lisa Granqvist Partner Workflow Automation Expert

FAQ research gets messy fast. You pull questions from one place, answers from another, then spend an afternoon trying to remember where each idea came from and what it’s “for.”

SEO strategists feel it when a content plan needs proof, not guesses. Marketing leads feel it when “AI Search” starts showing up in stakeholder questions. And writers get stuck rewriting the same explanations because there is no clean, intent-labeled source of truth. This SE Ranking FAQs automation gives you a structured dataset you can actually use.

You’ll set up an n8n workflow that pulls real AI search prompts from SE Ranking, turns them into Q&A pairs, classifies intent with OpenAI, and exports a tidy JSON file (sources included).

How This Automation Works

Here’s the complete workflow you’ll be setting up:

n8n Workflow Template: SE Ranking + OpenAI: intent-ready FAQ research

Why This Matters: FAQ Research That Doesn’t Fall Apart

“Let’s add an FAQ section” sounds simple until you try to do it with any rigor. Real questions live in AI search prompts, SERP features, support tickets, competitor pages, and random docs. You can absolutely copy-paste your way through it, but then you hit the second problem: you can’t explain why a question belongs on a page, what intent it serves, or where the insight came from. That’s when teams start shipping thin FAQs that don’t rank, don’t help users, and don’t stand up to scrutiny.

It adds up fast. Here’s where it breaks down.

  • You end up with a Google Doc full of questions, but no consistent structure for answers, sources, or intent.
  • Manual sorting into buckets like “how-to” vs “pricing” turns into opinion wars instead of repeatable logic.
  • Sources get lost, so you can’t validate or revisit the underlying AI search prompt later.
  • Refreshing FAQs becomes a quarterly slog, which means your content drifts while competitors keep updating.

What You’ll Build: An Intent-Classified FAQ Dataset From Real AI Search

This workflow turns AI search behavior into something you can plan with. You start by entering a target domain, region/source, and a few filters (like keyword includes/excludes and result limits). n8n then calls the SE Ranking API to fetch AI Search prompts tied to that domain. From those prompts, the workflow extracts candidate questions, builds Q&A pairs, and collects reference links so every item has a trail back to the source. Next, OpenAI runs a zero-shot classifier to label each Q&A with an intent category (HOW_TO, DEFINITION, PRICING, and more) and a confidence score. Finally, everything is merged and exported as a structured JSON file you can use for SEO pages, briefs, documentation, or a knowledge base.

The workflow starts with a single run (Manual Start) and parameter inputs. After SE Ranking data is pulled, code steps isolate questions, assemble answers, and attach reference links. OpenAI then classifies intent and the workflow aggregates everything into one clean JSON dataset ready to share or automate further.
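The exact export schema depends on the workflow's code nodes, but based on the fields described above (question, answer, intent category, confidence score, and source links), each item in the final JSON might look like this sketch. Field names here are illustrative assumptions, not taken verbatim from the template:

```python
import json

# Hypothetical shape of one item in the exported dataset.
# Keys are illustrative; the actual names come from the workflow's
# code nodes and the classifier's output schema.
faq_item = {
    "id": 1,
    "question": "How do I track AI search visibility for my domain?",
    "answer": "Draft answer assembled from the matching AI search prompt.",
    "category": "HOW_TO",       # intent label from the zero-shot classifier
    "confidence": 0.92,         # classifier confidence score
    "sources": [                # reference links parsed from the prompt data
        "https://example.com/some-cited-page"
    ],
}

export = json.dumps([faq_item], indent=2)
```

Having every item carry its own `sources` list is what keeps the "trail back to the source" intact when the dataset moves into briefs or a CMS.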

What You’re Building

Expected Results

Say you’re building FAQs for 10 core pages and you want about 20 solid questions per page. Manually, you might spend about 10 minutes per question gathering context, writing a first-pass answer, and pasting sources, which is roughly 30+ hours of work. With this workflow, you can pull and classify a batch in one run (often 5–10 minutes), then spend your time reviewing and editing instead of collecting. Most teams get an initial “good enough” dataset in a single afternoon, not a full week.
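The estimate above is simple arithmetic, sketched here with the same assumptions (10 pages, 20 questions per page, 10 minutes of manual work per question):

```python
pages = 10
questions_per_page = 20
minutes_per_question = 10  # gathering context, drafting, pasting sources

total_questions = pages * questions_per_page  # 200 questions
manual_hours = total_questions * minutes_per_question / 60

print(round(manual_hours, 1))  # ~33.3 hours of manual collection work
```

Even if your per-question time is half that, the manual path stays in the double-digit hours while a workflow run stays in minutes.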

Before You Start

  • n8n instance (try n8n Cloud free)
  • Self-hosting option if you prefer (Hostinger works well)
  • SE Ranking for AI Search prompt data access
  • OpenAI to classify FAQ intent and confidence
  • SE Ranking API key (get it from your SE Ranking account/API settings)

Skill level: Intermediate. You’ll paste API keys, adjust a few fields, and run test executions.

Want someone to build this for you? Talk to an automation expert (free 15-minute consultation).

Step by Step

You define the target and filters. In the input fields node, you set the domain, region/source, and any keyword include/exclude rules so the pull matches the exact niche you care about.

SE Ranking data gets fetched and unpacked. n8n calls the SE Ranking AI Search endpoint, then separates raw prompts, reference links, and question candidates so they can be processed cleanly instead of as one giant blob.

OpenAI classifies intent (and confidence). The extracted Q&A items are sent through a zero-shot classifier backed by an OpenAI chat model, which tags each item as HOW_TO, DEFINITION, PRICING, and other intent types your team can act on.

A structured JSON file is produced. Everything gets merged and aggregated into a final dataset, converted into a binary payload, and written to disk as a JSON export you can plug into analysis, content planning, or publishing workflows.

You can easily modify the intent taxonomy to match your site (for example, splitting “PRICING” into “COST” vs “PLANS”). See the full implementation guide below for customization options.

Step-by-Step Implementation Guide

Step 1: Configure the Manual Trigger

Set up the manual trigger to start the workflow for testing and on-demand runs.

  1. Add the Manual Start Trigger node as the workflow entry point.
  2. Connect Manual Start Trigger to Assign Input Parameters.
Use the manual trigger while building the workflow so you can run it and inspect data at each node.

Step 2: Configure Input Parameters

Define the target site and query parameters that will be sent to the SERanking API.

  1. Open Assign Input Parameters and add the following assignments:
  2. Set target_site to seranking.com, engine to ai-mode, source to us, and scope to domain.
  3. Set sort to volume, sort_order to desc, offset to 0, and limit to 100.
  4. Set multi_keyword_included to [ [ { "type": "contains", "value": "seo" } ] ].
  5. Set multi_keyword_excluded to [ [ { "type": "contains", "value": "seo" }, { "type": "contains", "value": "tools" } ], [ { "type": "contains", "value": "backlinks" } ] ].
⚠️ Common Pitfall: Keep the JSON formatting in multi_keyword_included and multi_keyword_excluded exactly as shown to avoid API validation errors.
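One way to avoid hand-typing that JSON is to build the filter values as real data structures and serialize them, which guarantees valid JSON for the node fields. A sketch mirroring the example values above (treat the exact AND/OR semantics of the nesting as an assumption to verify against SE Ranking's API documentation):

```python
import json

# Outer list: groups of conditions; inner list: conditions within a group.
# These mirror the example values from Step 2.
multi_keyword_included = [
    [{"type": "contains", "value": "seo"}],
]
multi_keyword_excluded = [
    [{"type": "contains", "value": "seo"}, {"type": "contains", "value": "tools"}],
    [{"type": "contains", "value": "backlinks"}],
]

# json.dumps produces exactly the formatting the API expects.
included_json = json.dumps(multi_keyword_included)
excluded_json = json.dumps(multi_keyword_excluded)
```

Paste the serialized strings into the node fields; a stray quote or bracket from manual editing is the usual cause of the validation errors mentioned above.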

Step 3: Connect SERanking API and Fetch Prompts

Configure the API call to fetch prompts and route the output into parallel processing branches.

  1. Open Fetch SERanking Prompts and set URL to https://api.seranking.com/v1/ai-search/prompts-by-target.
  2. Enable Send Query and map query parameters to the assigned input values:
  3. Set target to {{ $json.target_site }}, scope to {{ $json.scope }}, source to {{ $json.source }}, engine to {{ $json.engine }}.
  4. Set sort to {{ $json.sort }}, sort_order to {{ $json.sort_order }}, offset to {{ $json.offset }}, limit to {{ $json.limit }}.
  5. Set filter[multi_keyword_included] to {{ $json.multi_keyword_included }} and filter[multi_keyword_excluded] to {{ $json.multi_keyword_excluded }}.
  6. Credential Required: Connect your httpHeaderAuth credentials in Fetch SERanking Prompts (the node is configured to use genericCredentialType with httpHeaderAuth).
  7. Confirm the parallel routing: Fetch SERanking Prompts outputs to Parse Reference Links, Build QnA Pairs, Isolate Questions, and Collect Raw Prompts in parallel.
If SERanking requires a bearer token instead, switch the node’s authentication to httpBearerAuth and connect that credential type.
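For debugging outside n8n, the same request can be sketched with Python's standard library. The header value shown assumes SE Ranking's `Token <key>` format and an `Authorization` header name; verify both against your SE Ranking account's API settings:

```python
from urllib.parse import urlencode

API_KEY = "YOUR_SERANKING_API_KEY"  # placeholder, not a real key

# Query parameters matching the values assigned in Step 2.
params = {
    "target": "seranking.com",
    "scope": "domain",
    "source": "us",
    "engine": "ai-mode",
    "sort": "volume",
    "sort_order": "desc",
    "offset": 0,
    "limit": 100,
}
url = "https://api.seranking.com/v1/ai-search/prompts-by-target?" + urlencode(params)

# Header auth: the word "Token", a space, then the key.
headers = {"Authorization": f"Token {API_KEY}"}
```

Reproducing a failing call this way makes it easy to tell an auth problem from a parameter problem before touching the n8n node again.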

Step 4: Process and Merge Prompt Data Streams

Extract questions, reference links, QnA pairs, and raw prompts, then aggregate them into a combined dataset.

  1. Review code nodes to ensure each output structure is correct:
  2. Isolate Questions outputs { questions } from prompts.
  3. Parse Reference Links outputs { urls } filtered to valid http links.
  4. Build QnA Pairs outputs { qna } with question/answer pairs.
  5. Collect Raw Prompts outputs { prompts } for full raw data retention.
  6. Confirm that Isolate Questions outputs to both ZeroShot QnA Classifier and Combine Custom Streams in parallel.
  7. Set Combine Custom Streams to merge 4 inputs with Number Inputs set to 4.
  8. In Aggregate Custom Data, set Aggregate to aggregateAllItemData and Destination Field Name to custom_aggregate.
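The code nodes themselves run JavaScript inside n8n, but their logic is simple enough to sketch in Python. The heuristics below (a trailing question mark marks a question candidate, an http(s) prefix marks a valid link) are assumptions about what the nodes consider "questions" and "valid links":

```python
prompts = [
    "what is ai search visibility?",
    "compare seo tools https://example.com/tools",
    "how do backlinks affect rankings?",
]

# Isolate Questions: keep prompts that read as questions.
questions = [p for p in prompts if p.rstrip().endswith("?")]

# Parse Reference Links: keep only tokens that are http(s) URLs.
urls = [
    token
    for p in prompts
    for token in p.split()
    if token.startswith(("http://", "https://"))
]
```

Checking each branch's output against expectations like these (counts, shapes, no empty arrays) is the quickest way to confirm the parallel routing is wired correctly before the merge.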

Step 5: Configure AI Classification and Enrichment

Classify questions into intent categories using the zero-shot classifier and merge the results with custom data.

  1. Open ZeroShot QnA Classifier and keep the Text prompt as provided, ensuring the questions expression remains {{ $json.questions.toJsonString() }}.
  2. Verify the Schema Type is manual and the Input Schema defines the expected fields (id, question, category, confidence).
  3. Ensure OpenAI Chat Model is connected to ZeroShot QnA Classifier as the language model.
  4. Credential Required: Connect your openAiApi credentials in OpenAI Chat Model (credentials are added to the parent model node, not the classifier).
  5. Confirm that ZeroShot QnA Classifier outputs to Merge Enriched Output, and Aggregate Custom Data also outputs to Merge Enriched Output.
  6. In Aggregate Final Dataset, set Aggregate to aggregateAllItemData to combine enriched results.
⚠️ Common Pitfall: Do not add OpenAI credentials to ZeroShot QnA Classifier directly. The credentials must be connected to OpenAI Chat Model.
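Because the classifier uses a manual schema, every item should come back with `id`, `question`, `category`, and `confidence`. A validation-and-filter pass like this sketch (the 0.7 threshold is an arbitrary example, not a value from the template) catches malformed or low-confidence items before they reach the export:

```python
REQUIRED = {"id", "question", "category", "confidence"}

# Example classifier output in the shape the manual schema defines.
results = [
    {"id": 1, "question": "What is AI search?", "category": "DEFINITION", "confidence": 0.95},
    {"id": 2, "question": "How much does it cost?", "category": "PRICING", "confidence": 0.41},
]

# Drop items missing required fields, then keep confident classifications.
valid = [r for r in results if REQUIRED <= r.keys()]
confident = [r for r in valid if r["confidence"] >= 0.7]
```

The same filter is a common customization: writers only see questions the model was sure about, and the borderline ones go to a review queue.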

Step 6: Create Output File

Convert the final dataset into a binary payload and save it as a JSON file.

  1. In Create Binary Payload, keep the Function Code that serializes JSON and base64-encodes it for binary output.
  2. Open Write JSON File and set Operation to write.
  3. Set File Name to =C:\\SERanking_PromptFAQ.json.
  4. Set Data Property Name to =data.
Ensure the n8n instance has write access to C:\ or update the path to a valid directory.
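The Create Binary Payload step serializes the dataset and base64-encodes it, which is how n8n represents binary data internally. The same transformation, sketched in Python:

```python
import base64
import json

dataset = {"faqs": [{"id": 1, "question": "What is AI search?", "category": "DEFINITION"}]}

# Serialize to JSON, then base64-encode for a binary payload.
raw = json.dumps(dataset, indent=2).encode("utf-8")
payload_b64 = base64.b64encode(raw).decode("ascii")

# Decoding recovers the original JSON exactly.
restored = json.loads(base64.b64decode(payload_b64))
```

This round trip is also a handy sanity check: if the written file doesn't decode back to the original dataset, the payload node is the place to look.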

Step 7: Test and Activate Your Workflow

Run the workflow manually, verify the output, and activate it for production use.

  1. Click Execute Workflow from Manual Start Trigger to run a test.
  2. Check that Fetch SERanking Prompts returns data and that each parallel branch produces its expected output.
  3. Verify Merge Enriched Output and Aggregate Final Dataset contain both classification results and custom aggregates.
  4. Confirm Write JSON File creates SERanking_PromptFAQ.json with valid JSON content.
  5. When satisfied, switch the workflow to Active for production use.

Troubleshooting Tips

  • SE Ranking credentials can expire or need specific permissions. If things break, check your SE Ranking API key status and header auth format in the HTTP Request node first.
  • If you’re using Wait nodes or external processing, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
  • Default prompts in AI nodes are generic. Add your brand voice early or you’ll be editing outputs forever.

Quick Answers

What’s the setup time for this SE Ranking FAQs automation?

About 30 minutes if your API keys are ready.

Is coding required for this FAQ intent classification?

No. You’ll mostly configure credentials and edit a few input fields. The “code” nodes are already built into the workflow.

Is n8n free to use for this SE Ranking FAQs workflow?

Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in OpenAI API costs (often a few cents per batch, depending on how many questions you classify).

Where can I host n8n to run this automation?

Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.

Can I modify this SE Ranking FAQs workflow for different use cases?

Yes, and you probably should. You can change the intent categories in the ZeroShot QnA Classifier node, tighten the niche by editing include/exclude filters in the Assign Input Parameters node, and swap the Write JSON File step for Google Sheets, a webhook, or a CMS output. A common tweak is filtering out low-confidence items before export so your writers only see “ready” questions.

Why is my SE Ranking connection failing in this workflow?

Usually it’s the header auth format. SE Ranking expects the literal word Token, then a space, then your API key as the header value, and a missing space breaks the request. Also check that the key is active and permitted for the AI Search endpoint. If it works in SE Ranking but not in n8n, inspect the HTTP Request node’s last response for a clear error message.

What volume can this SE Ranking FAQs workflow process?

It depends mostly on your SE Ranking limits and how many items you request per run, but dozens to a few hundred questions in a batch is realistic.

Is this SE Ranking FAQs automation better than using Zapier or Make?

Often, yes, because this isn’t just “move data from A to B.” You’re merging streams, transforming JSON, running a classifier, and exporting a structured dataset, which is the kind of multi-step logic n8n handles comfortably. Self-hosting also matters if you want to run big refreshes without watching task limits. Zapier or Make can still work if your version is very light (for example, pull data, send to Sheets), but you’ll usually hit complexity walls once you add enrichment and aggregation. Talk to an automation expert if you want help choosing.

Once this is running, FAQ research stops being a one-off project and becomes a repeatable dataset you can refresh whenever priorities change. Honestly, that’s the difference between “we should add FAQs” and a system your team can scale.

Need Help Setting This Up?

Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.
