January 21, 2026

OpenAI + SerpAPI: product shortlists you can trust

Lisa Granqvist, Workflow Automation Expert

You start with a simple question: “What’s the best [thing] to buy?” Then you fall into a tab spiral. Ten retailers, five review sites, conflicting “best of” lists, and prices that change before you even finish comparing.

This research grind hits marketing teams who need quick, defensible recommendations for campaigns. Small business owners making equipment purchases feel it too, as do agency leads who get asked for “top picks” in Slack at 4:45 PM.

This workflow turns one chat message into a five-product shortlist with current pricing, where to buy, and a readable review summary. You’ll see how it works, what it replaces, and what you need to run it reliably.

How This Automation Works

The Challenge: Fast product research you can actually defend

Buying decisions turn into research projects. Even when you “just need five options,” you still have to define what good looks like, search, filter out junk, cross-check specs, and confirm the price isn’t from six months ago. Then comes the messy part: summarizing reviews in a way that doesn’t cherry-pick, and presenting it so a teammate can skim it without asking you twelve follow-up questions. The time cost is obvious, but the mental load is the real tax. You keep re-checking because you don’t fully trust your own notes.

It adds up fast. And the breakpoints are always the same:

  • You end up comparing products in different formats, so “apples to apples” becomes guesswork.
  • Prices and availability change, which means your shortlist gets stale before anyone approves it.
  • Review research gets biased because you only read what you have time to read.
  • The final “report” lives in a chat thread or a half-finished doc that nobody can reuse next time.

The Fix: One message becomes a complete buying report

This workflow starts with a chat message where you type what you want to buy (for example: “gaming desktop computer,” “mid-size three row SUV,” or “golf driver”). That message kicks off an “Item Finder” AI agent using OpenAI (GPT-4o) plus SerpAPI to search the web and identify five high-quality, modern options that match your request. Each of those five product names then gets sent to its own reviewer agent, which pulls fresh info from the internet and turns it into a structured mini-brief: key features, the lowest price it can find, retailer options, and an honest review summary with overall star ratings. Finally, a compiler agent (using GPT-4o-mini) merges everything into a clean, readable report you can share internally without rewriting it.

The workflow begins in chat, then branches into five parallel review tracks. After that, it recombines the outputs, summarizes the review signals, and compiles one final shortlist you can paste into an email, a doc, or a client update.
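The branching shape of the workflow can be sketched in a few lines of Python. Every function here is an illustrative stub standing in for an n8n agent node (backed by OpenAI and SerpAPI); the names and return shapes are assumptions, not the template's actual internals, and the real workflow runs the five review tracks in parallel rather than in a loop.

```python
def discover_products(request: str) -> list[str]:
    """Stub for the Item Finder agent: one chat message in, five candidates out."""
    return [f"{request} option {i}" for i in range(1, 6)]

def review_product(name: str) -> dict:
    """Stub for one reviewer agent: pricing, retailers, and a review summary."""
    return {"product": name, "lowest_price": None, "retailers": [], "summary": ""}

def compile_report(reviews: list[dict]) -> str:
    """Stub for the compiler agent: merge five mini-briefs into one report."""
    return "\n".join(r["product"] for r in reviews)

def run(request: str) -> str:
    products = discover_products(request)            # 1 message -> 5 names
    reviews = [review_product(p) for p in products]  # n8n runs these in parallel
    return compile_report(reviews)                   # 1 merged shortlist
```

The point of the sketch is the data flow: one input fans out to five independent briefs, then collapses back to a single artifact.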

Real-World Impact

Say you need a shortlist for one purchase request each week. Manually, a realistic flow is about 10 minutes to find five candidates, then roughly 20 minutes per product to check pricing, sellers, and reviews. That’s around 2 hours, and it’s easy to lose another hour polishing the write-up. With this workflow, you send one chat message (under a minute), wait a few minutes for the agents to run, and you get a ready-to-share buying report. That’s usually a couple hours back per request.

Requirements

  • n8n instance (try n8n Cloud free)
  • Self-hosting option if you prefer (Hostinger works well)
  • OpenAI for GPT-4o and GPT-4o-mini generation
  • SerpAPI to pull current web results
  • OpenAI API key (get it from platform.openai.com/api-keys)

Skill level: Intermediate. You’ll connect API keys, test prompts, and validate outputs.

Need help implementing this? Talk to an automation expert (free 15-minute consultation).

The Workflow Flow

A chat message kicks it off. You type a product request into the n8n chat trigger, like “work laptop for video editing” or “best standing desk for tall people.” The workflow keeps some conversational context using a memory window, which helps when you refine the request.

The workflow turns your request into smart searches. An AI agent uses OpenAI to generate search queries, then calls SerpAPI to pull fresh results. Those results are parsed into a structured format so the rest of the workflow isn’t guessing what’s a product name versus a random mention.
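Under the hood, each SerpAPI tool node issues a request like the one built below. The `serpapi.com/search.json` endpoint, the `engine`/`q`/`api_key` parameters, and the `organic_results` response field match SerpAPI's Google engine; the candidate-extraction step is an illustrative simplification of what the structured parser does.

```python
from urllib.parse import urlencode

def build_search_url(query: str, api_key: str) -> str:
    """Assemble a SerpAPI Google search request URL."""
    params = {"engine": "google", "q": query, "api_key": api_key}
    return "https://serpapi.com/search.json?" + urlencode(params)

def extract_candidates(response: dict) -> list[str]:
    """Pull result titles out of SerpAPI's organic results for the parser to structure."""
    return [r["title"] for r in response.get("organic_results", [])]
```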

Five reviewers do the heavy lifting in parallel. Each reviewer agent takes one product, searches again with SerpAPI for up-to-date pricing, retailers, and credible review signals, then summarizes what matters. This is where you get the “pros and cons” and the overall star rating people actually want.
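The parallel fan-out is what saves the time. In n8n this happens automatically when one node's output feeds five agents; a rough Python equivalent with a stubbed reviewer (the brief's fields are assumptions about the agent's output shape) looks like:

```python
from concurrent.futures import ThreadPoolExecutor

def review_one(product: str) -> dict:
    # Stub for one reviewer agent; in the workflow this is a GPT-4o agent
    # with a SerpAPI tool attached. Fields shown are illustrative.
    return {"product": product, "lowest_price": None,
            "retailers": [], "stars": None, "pros": [], "cons": []}

def review_all(products: list[str]) -> list[dict]:
    """Fan out one reviewer per product, mirroring the five parallel tracks."""
    with ThreadPoolExecutor(max_workers=len(products)) as pool:
        return list(pool.map(review_one, products))
```

Because each reviewer only depends on its one product name, total latency is roughly the slowest single review, not the sum of all five.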

Everything gets merged into one report. The workflow combines the five reviews, aggregates the review themes, and uses a final compiler agent to format a clean shortlist you can share without rewriting.

You can easily modify the number of products (five) to three or ten based on your needs. See the full implementation guide below for customization options.

Step-by-Step Implementation Guide

Step 1: Configure the Chat Trigger

This workflow starts when a new chat message is received, so you’ll configure the entry point first.

  1. Add the Chat Message Trigger node and keep its default parameters.
  2. Copy the generated webhook URL from Chat Message Trigger for use in your chat interface or testing tool.
  3. Connect Chat Message Trigger to Product Discovery Agent.

Step 2: Set Up the Core Product Discovery Agent

The main agent orchestrates discovery and requires AI and tool sub-components to be attached as sub-nodes.

  1. Open Product Discovery Agent and verify it is connected to Primary Chat Model as the language model.
  2. Attach Conversation Window Memory to Product Discovery Agent via the AI memory connection.
  3. Attach Structured Result Parser to Product Discovery Agent via the AI output parser connection.
  4. Attach Search API Tool A to Product Discovery Agent via the AI tool connection.
  5. Credential Required: Connect your OpenAI credentials on Primary Chat Model.
  6. Credential Required: Connect your SerpApi credentials on Search API Tool A (credentials are added to the parent agent, not the tool sub-node).

⚠️ Common Pitfall: AI tool sub-nodes like Conversation Window Memory and Structured Result Parser do not store credentials themselves—add credentials to Primary Chat Model and Search API Tool A as required by Product Discovery Agent.
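The Structured Result Parser's job is to reject malformed agent output before it fans out to the reviewers. A minimal Python equivalent of that check is shown below; the `{"products": [...]}` schema is an assumption for illustration, not the node's exact configuration.

```python
import json

def parse_discovery_output(raw: str, expected: int = 5) -> list[str]:
    """Validate the discovery agent's JSON the way an output parser would:
    it must contain exactly `expected` product names."""
    data = json.loads(raw)
    products = data.get("products")
    if not isinstance(products, list) or len(products) != expected:
        raise ValueError(f"expected {expected} products, got {products!r}")
    return [str(p) for p in products]
```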

Step 3: Configure Parallel Reviewer Agents

After discovery, five review agents run in parallel to validate and expand the findings.

  1. Connect Product Discovery Agent outputs to Review Agent One through Review Agent Five so all five run in parallel.
  2. Verify each review agent is connected to its paired model: Reviewer Model 1 → Review Agent One, Reviewer Model 2 → Review Agent Two, Reviewer Model 3 → Review Agent Three, Reviewer Model 4 → Review Agent Four, Reviewer Model 5 → Review Agent Five.
  3. Attach the matching tool nodes: Search API Tool B → Review Agent One, Search API Tool F → Review Agent Two, Search API Tool C → Review Agent Three, Search API Tool D → Review Agent Four, Search API Tool E → Review Agent Five.
  4. Credential Required: Connect your OpenAI credentials on all reviewer models (Reviewer Model 1 through Reviewer Model 5).
  5. Credential Required: Connect your SerpApi credentials on all reviewer tools (Search API Tool B, Search API Tool C, Search API Tool D, Search API Tool E, Search API Tool F).

Tip: Because there are many AI nodes (7 OpenAI chat models and 6 SerpApi tools), it’s easiest to create one credential entry for OpenAI and one for SerpApi, then select them across all related nodes.

Step 4: Combine and Summarize Reviewer Outputs

This stage merges parallel review results and condenses them into a unified summary.

  1. Connect Review Agent One, Review Agent Two, Review Agent Three, Review Agent Four, and Review Agent Five into Combine Reviewer Outputs.
  2. Connect Combine Reviewer Outputs to Summarize Reviews.
  3. Ensure Summarize Reviews remains configured as the aggregation step (default settings).
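The merge-then-aggregate step can be pictured as the sketch below: collect the five reviewer briefs, then condense their shared signals (here, an average star rating). The field names are illustrative assumptions, not the node's actual settings.

```python
def combine_and_summarize(briefs: list[dict]) -> dict:
    """Merge reviewer briefs and aggregate review signals (illustrative fields)."""
    rated = [b for b in briefs if b.get("stars") is not None]
    avg = round(sum(b["stars"] for b in rated) / len(rated), 2) if rated else None
    return {"products": [b["product"] for b in briefs],
            "average_stars": avg,
            "count": len(briefs)}
```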

Step 5: Compile the Final Report

The final agent produces the report using a dedicated chat model.

  1. Connect Summarize Reviews to Final Report Compiler.
  2. Verify Final Chat Model is connected as the language model for Final Report Compiler.
  3. Credential Required: Connect your OpenAI credentials on Final Chat Model.

Step 6: Review Optional Documentation Notes

The workflow includes a branding note for reference and documentation.

  1. Keep Flowpast Branding as a visual reference; it does not affect execution.

Step 7: Test & Activate Your Workflow

Run a manual test to confirm the end-to-end flow before turning it on.

  1. Click Execute Workflow and send a sample message to Chat Message Trigger.
  2. Confirm that Product Discovery Agent triggers all five reviewers in parallel and that outputs merge into Combine Reviewer Outputs.
  3. Verify Summarize Reviews aggregates results and Final Report Compiler produces a structured response.
  4. When successful, toggle the workflow to Active for production use.

Watch Out For

  • SerpAPI credentials can expire or need specific permissions. If things break, check your SerpAPI dashboard usage and key status first.
  • If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
  • Default prompts in AI nodes are generic. Add your brand voice early or you’ll be editing outputs forever.

Common Questions

How quickly can I implement this product research automation?

About 30 minutes if you already have your OpenAI and SerpAPI keys.

Can non-technical teams implement this product research automation?

Yes, but you’ll want one careful owner for setup and testing. No coding is required, though you do need to paste API keys and validate the outputs.

Is n8n free to use for this product research automation workflow?

Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in OpenAI API usage (this workflow costs about $0.06 per run) and SerpAPI usage (each run uses around 8–15 searches).
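For budgeting, the per-run figures above translate into a simple estimate. The $0.06/run OpenAI figure comes from this article; the SerpAPI per-search price below is a placeholder assumption, so check your actual plan's rate.

```python
def monthly_cost(runs: int, openai_per_run: float = 0.06,
                 searches_per_run: int = 12, serpapi_per_search: float = 0.01) -> float:
    """Rough monthly API spend in dollars. serpapi_per_search is a
    placeholder -- substitute the effective rate from your SerpAPI plan."""
    return round(runs * (openai_per_run + searches_per_run * serpapi_per_search), 2)
```

For example, 20 requests a month at these placeholder rates comes to a few dollars of API usage on top of any n8n hosting cost.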

Where can I host n8n to run this automation?

Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.

How do I adapt this product research automation solution to my specific challenges?

Start by tightening the prompt your reviewers use so “top picks” matches your definition (budget, size, region, use case). You can also change the Product Discovery Agent to return three products instead of five, or add a sixth reviewer if you want a “budget pick.” If you prefer a spreadsheet deliverable, add a Google Sheets or Excel 365 write step after the Final Report Compiler. For teams doing repeat purchases, store the final report in Airtable so you can filter by category later.

Why is my SerpAPI connection failing in this workflow?

Usually it’s an invalid or exhausted API key. Check your SerpAPI account usage, confirm the key is pasted into every SerpAPI tool node, and watch for rate limits if you run multiple requests back-to-back.

What’s the capacity of this product research automation solution?

On n8n Cloud, capacity depends on your plan’s monthly executions, while self-hosting has no fixed execution cap (it mainly depends on your server). Practically, SerpAPI limits and OpenAI throughput are the real bottlenecks, and this workflow can run several requests per hour comfortably for a small team.

Is this product research automation better than using Zapier or Make?

Often, yes, because this flow needs branching, merging, and multi-agent logic that gets awkward fast in simpler builders. n8n also gives you a self-hosted path, which matters if you plan to run lots of research requests without watching task limits. That said, if your goal is only “send a search result to a sheet,” Zapier or Make can be totally fine. This workflow is more like a mini research system than a two-step zap. Talk to an automation expert if you want help choosing the simplest option that still holds up.

When product research stops being a time sink, you make better calls faster. Let the workflow do the digging, then use your judgment where it actually matters.

Need Help Setting This Up?

Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.

Lisa Granqvist

Workflow Automation Expert

Expert in workflow automation and no-code tools.
