OpenAI + SerpAPI: product shortlists you can trust
You start with a simple question: “What’s the best [thing] to buy?” Then you fall into a tab spiral. Ten retailers, five review sites, conflicting “best of” lists, and prices that change before you even finish comparing.
This product research grind hits marketing teams who need quick, defensible recommendations for campaigns. But small business owners making equipment purchases feel it too. Same with agency leads who get asked for “top picks” in Slack at 4:45 PM.
This workflow turns one chat message into a five-product shortlist with current pricing, where to buy, and a readable review summary. You’ll see how it works, what it replaces, and what you need to run it reliably.
How This Automation Works
See how this solves the problem:
n8n Workflow Template: OpenAI + SerpAPI: product shortlists you can trust
flowchart LR
subgraph sg0["Chat Message Flow"]
direction LR
n0@{ icon: "mdi:play-circle", form: "rounded", label: "Chat Message Trigger", pos: "b", h: 48 }
n1@{ icon: "mdi:brain", form: "rounded", label: "Primary Chat Model", pos: "b", h: 48 }
n2@{ icon: "mdi:memory", form: "rounded", label: "Conversation Window Memory", pos: "b", h: 48 }
n3@{ icon: "mdi:wrench", form: "rounded", label: "Search API Tool A", pos: "b", h: 48 }
n4@{ icon: "mdi:robot", form: "rounded", label: "Structured Result Parser", pos: "b", h: 48 }
n5["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/merge.svg' width='40' height='40' /></div><br/>Combine Reviewer Outputs"]
n6@{ icon: "mdi:cog", form: "rounded", label: "Summarize Reviews", pos: "b", h: 48 }
n7@{ icon: "mdi:brain", form: "rounded", label: "Final Chat Model", pos: "b", h: 48 }
n8@{ icon: "mdi:wrench", form: "rounded", label: "Search API Tool B", pos: "b", h: 48 }
n9@{ icon: "mdi:wrench", form: "rounded", label: "Search API Tool C", pos: "b", h: 48 }
n10@{ icon: "mdi:wrench", form: "rounded", label: "Search API Tool D", pos: "b", h: 48 }
n11@{ icon: "mdi:wrench", form: "rounded", label: "Search API Tool E", pos: "b", h: 48 }
n12@{ icon: "mdi:brain", form: "rounded", label: "Reviewer Model 1", pos: "b", h: 48 }
n13@{ icon: "mdi:wrench", form: "rounded", label: "Search API Tool F", pos: "b", h: 48 }
n14@{ icon: "mdi:brain", form: "rounded", label: "Reviewer Model 2", pos: "b", h: 48 }
n15@{ icon: "mdi:brain", form: "rounded", label: "Reviewer Model 3", pos: "b", h: 48 }
n16@{ icon: "mdi:brain", form: "rounded", label: "Reviewer Model 4", pos: "b", h: 48 }
n17@{ icon: "mdi:brain", form: "rounded", label: "Reviewer Model 5", pos: "b", h: 48 }
n18@{ icon: "mdi:robot", form: "rounded", label: "Product Discovery Agent", pos: "b", h: 48 }
n19@{ icon: "mdi:robot", form: "rounded", label: "Review Agent One", pos: "b", h: 48 }
n20@{ icon: "mdi:robot", form: "rounded", label: "Review Agent Two", pos: "b", h: 48 }
n21@{ icon: "mdi:robot", form: "rounded", label: "Review Agent Three", pos: "b", h: 48 }
n22@{ icon: "mdi:robot", form: "rounded", label: "Review Agent Four", pos: "b", h: 48 }
n23@{ icon: "mdi:robot", form: "rounded", label: "Review Agent Five", pos: "b", h: 48 }
n24@{ icon: "mdi:robot", form: "rounded", label: "Final Report Compiler", pos: "b", h: 48 }
n5 --> n6
n8 -.-> n19
n3 -.-> n18
n13 -.-> n20
n9 -.-> n21
n10 -.-> n22
n11 -.-> n23
n6 --> n24
n19 --> n5
n20 --> n5
n21 --> n5
n22 --> n5
n23 --> n5
n18 --> n19
n18 --> n20
n18 --> n21
n18 --> n22
n18 --> n23
n7 -.-> n24
n1 -.-> n18
n12 -.-> n19
n14 -.-> n20
n15 -.-> n21
n16 -.-> n22
n17 -.-> n23
n2 -.-> n18
n4 -.-> n18
n0 --> n18
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n0 trigger
class n4,n18,n19,n20,n21,n22,n23,n24 ai
class n1,n7,n12,n14,n15,n16,n17 aiModel
class n3,n8,n9,n10,n11,n13 ai
class n2 ai
classDef customIcon fill:none,stroke:none
class n5 customIcon
The Challenge: Fast product research you can actually defend
Buying decisions turn into research projects. Even when you “just need five options,” you still have to define what good looks like, search, filter out junk, cross-check specs, and confirm the price isn’t from six months ago. Then comes the messy part: summarizing reviews in a way that doesn’t cherry-pick, and presenting it so a teammate can skim it without asking you twelve follow-up questions. The time cost is obvious, but the mental load is the real tax. You keep re-checking because you don’t fully trust your own notes.
It adds up fast. And the breakpoints are always the same:
- You end up comparing products in different formats, so “apples to apples” becomes guesswork.
- Prices and availability change, which means your shortlist gets stale before anyone approves it.
- Review research gets biased because you only read what you have time to read.
- The final “report” lives in a chat thread or a half-finished doc that nobody can reuse next time.
The Fix: One message becomes a complete buying report
This workflow starts with a chat message where you type what you want to buy (for example: “gaming desktop computer,” “mid-size three-row SUV,” or “golf driver”). That message kicks off the Product Discovery Agent, which uses OpenAI (GPT-4o) plus SerpAPI to search the web and identify five high-quality, modern options that match your request. Each of those five product names then goes to its own reviewer agent, which pulls fresh info from the internet and turns it into a structured mini-brief: key features, the lowest price it can find, retailer options, and an honest review summary with overall star ratings. Finally, a compiler agent (using GPT-4o-mini) merges everything into a clean, readable report you can share internally without rewriting it.
The workflow begins in chat, then branches into five parallel review tracks. After that, it recombines the outputs, summarizes the review signals, and compiles one final shortlist you can paste into an email, a doc, or a client update.
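The contract between the discovery agent and the five reviewers is what keeps this reliable: the Structured Result Parser forces the agent to hand off exactly five clean product names. A minimal sketch of that contract in Python; the field names here are assumptions for illustration, not the template's exact schema:

```python
# Hypothetical schema the Structured Result Parser might enforce: the
# discovery agent must return exactly five named products.
DISCOVERY_SCHEMA = {
    "type": "object",
    "properties": {
        "products": {
            "type": "array",
            "minItems": 5,
            "maxItems": 5,
            "items": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "why_picked": {"type": "string"},
                },
                "required": ["name"],
            },
        }
    },
    "required": ["products"],
}

def validate_discovery(payload: dict) -> list[str]:
    """Return the five product names, or raise if the agent drifted."""
    products = payload.get("products", [])
    if len(products) != 5:
        raise ValueError(f"expected 5 products, got {len(products)}")
    return [p["name"] for p in products]

sample = {"products": [{"name": f"Product {i}"} for i in range(1, 6)]}
print(validate_discovery(sample))
```

A schema like this is why the reviewers never have to guess what counts as a product name versus a stray mention in search results.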
What Changes: Before vs. After
| What This Eliminates | Impact You’ll See |
|---|---|
| Tab-spiral research across retailers and review sites | One chat message returns a five-product shortlist |
| Stale pricing and availability checks | Each run pulls current prices and retailer options via SerpAPI |
| Cherry-picked review reading | Five parallel reviewers summarize review signals consistently |
| Reports buried in chat threads and half-finished docs | A clean, shareable buying report every time |
Real-World Impact
Say you need a shortlist for one purchase request each week. Manually, a realistic flow is about 10 minutes to find five candidates, then roughly 20 minutes per product to check pricing, sellers, and reviews. That’s around 2 hours, and it’s easy to lose another hour polishing the write-up. With this workflow, you send one chat message (under a minute), wait a few minutes for the agents to run, and you get a ready-to-share buying report. That’s usually a couple hours back per request.
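The arithmetic behind that estimate, as a quick sanity check:

```python
# Manual flow: ~10 min to find candidates, plus ~20 min per product
# checking pricing, sellers, and reviews across five products.
candidate_search_min = 10
per_product_review_min = 20
products = 5

manual_total_min = candidate_search_min + per_product_review_min * products
print(manual_total_min)                  # 110 minutes of research
print(round(manual_total_min / 60, 1))   # 1.8 hours, before write-up polish
```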
Requirements
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- OpenAI for GPT-4o and GPT-4o-mini generation
- SerpAPI to pull current web results
- OpenAI API key (get it from platform.openai.com/api-keys)
Skill level: Intermediate. You’ll connect API keys, test prompts, and validate outputs.
Need help implementing this? Talk to an automation expert (free 15-minute consultation).
The Workflow Flow
A chat message kicks it off. You type a product request into the n8n chat trigger, like “work laptop for video editing” or “best standing desk for tall people.” The workflow keeps some conversational context using a memory window, which helps when you refine the request.
The workflow turns your request into smart searches. An AI agent uses OpenAI to generate search queries, then calls SerpAPI to pull fresh results. Those results are parsed into a structured format so the rest of the workflow isn’t guessing what’s a product name versus a random mention.
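Under the hood, each SerpAPI tool call is roughly a GET against serpapi.com's search endpoint, and the agent reads candidates out of the `organic_results` array of the JSON response. An offline sketch of that request-and-parse step; the query, key, and sample response below are made-up placeholders:

```python
from urllib.parse import urlencode

def build_search_url(query: str, api_key: str) -> str:
    """Build the SerpAPI Google Search URL a tool node effectively calls."""
    params = {"engine": "google", "q": query, "api_key": api_key}
    return "https://serpapi.com/search.json?" + urlencode(params)

def extract_candidates(response: dict, limit: int = 5) -> list[dict]:
    """Pull title/link pairs from SerpAPI's organic_results array."""
    results = response.get("organic_results", [])
    return [{"title": r["title"], "link": r["link"]} for r in results[:limit]]

# Made-up sample in the shape of SerpAPI's organic_results.
sample_response = {
    "organic_results": [
        {"title": "Best Gaming Desktops 2024", "link": "https://example.com/a"},
        {"title": "Top 5 Prebuilt PCs", "link": "https://example.com/b"},
    ]
}

print(build_search_url("best gaming desktop computer", api_key="YOUR_KEY"))
print(extract_candidates(sample_response))
```

The parse step is the important part: by reducing each result to a title and a link, the downstream prompt sees a short, uniform list instead of raw HTML-flavored search output.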
Five reviewers do the heavy lifting in parallel. Each reviewer agent takes one product, searches again with SerpAPI for up-to-date pricing, retailers, and credible review signals, then summarizes what matters. This is where you get the “pros and cons” and the overall star rating people actually want.
Everything gets merged into one report. The workflow combines the five reviews, aggregates the review themes, and uses a final compiler agent to format a clean shortlist you can share without rewriting.
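Conceptually, the merge-and-summarize stage is simple. Here is a sketch of what Combine Reviewer Outputs plus Summarize Reviews do with five reviewer payloads; the field names are illustrative, not the template's exact schema:

```python
def combine_and_summarize(reviews: list[dict]) -> dict:
    """Merge per-product reviewer outputs and aggregate the star signal."""
    ratings = [r["stars"] for r in reviews if r.get("stars") is not None]
    return {
        "products": [r["name"] for r in reviews],
        "lowest_price_overall": min(r["lowest_price"] for r in reviews),
        "average_stars": round(sum(ratings) / len(ratings), 2),
    }

# Illustrative reviewer outputs, one per parallel review agent.
reviews = [
    {"name": "Model A", "lowest_price": 999.0, "stars": 4.5},
    {"name": "Model B", "lowest_price": 1199.0, "stars": 4.2},
    {"name": "Model C", "lowest_price": 849.0, "stars": 3.9},
    {"name": "Model D", "lowest_price": 1049.0, "stars": 4.7},
    {"name": "Model E", "lowest_price": 925.0, "stars": 4.1},
]
print(combine_and_summarize(reviews))
```

The final compiler agent then turns this aggregate plus the five mini-briefs into readable prose, which is why a cheaper model (GPT-4o-mini) is enough for that last step.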
You can easily modify the number of products (five) to three or ten based on your needs. See the full implementation guide below for customization options.
Step-by-Step Implementation Guide
Step 1: Configure the Chat Trigger
This workflow starts when a new chat message is received, so you’ll configure the entry point first.
- Add the Chat Message Trigger node and keep its default parameters.
- Copy the generated webhook URL from Chat Message Trigger for use in your chat interface or testing tool.
- Connect Chat Message Trigger to Product Discovery Agent.
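To smoke-test the trigger from outside the n8n UI, you can POST a message to the copied webhook URL. A standard-library sketch; the URL is a placeholder, and the `chatInput`/`sessionId` payload keys follow n8n's chat trigger convention (verify against your instance):

```python
import json
import urllib.request

# Placeholder: paste the webhook URL copied from the Chat Message Trigger.
WEBHOOK_URL = "https://your-n8n-host/webhook/REPLACE-WITH-YOUR-ID/chat"

def build_payload(message: str, session_id: str = "test-session") -> bytes:
    """JSON body for the chat trigger: the user message plus a session id."""
    return json.dumps({"chatInput": message, "sessionId": session_id}).encode()

def send_chat_message(message: str) -> str:
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=build_payload(message),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # network call to your instance
        return resp.read().decode()

# send_chat_message("gaming desktop computer")  # uncomment once URL is set
print(build_payload("gaming desktop computer"))
```

Keeping the `sessionId` stable across calls is what lets the Conversation Window Memory treat follow-up messages as one conversation.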
Step 2: Set Up the Core Product Discovery Agent
The main agent orchestrates discovery and requires AI and tool sub-components to be attached as sub-nodes.
- Open Product Discovery Agent and verify it is connected to Primary Chat Model as the language model.
- Attach Conversation Window Memory to Product Discovery Agent via the AI memory connection.
- Attach Structured Result Parser to Product Discovery Agent via the AI output parser connection.
- Attach Search API Tool A to Product Discovery Agent via the AI tool connection.
- Credential Required: Connect your OpenAI credentials on Primary Chat Model.
- Credential Required: Connect your SerpApi credentials on Search API Tool A.
⚠️ Common Pitfall: AI tool sub-nodes like Conversation Window Memory and Structured Result Parser do not store credentials themselves—add credentials to Primary Chat Model and Search API Tool A as required by Product Discovery Agent.
Step 3: Configure Parallel Reviewer Agents
After discovery, five review agents run in parallel to validate and expand the findings.
- Connect Product Discovery Agent outputs to Review Agent One, Review Agent Two, Review Agent Three, Review Agent Four, and Review Agent Five so they run simultaneously.
- Verify each review agent is connected to its paired model: Reviewer Model 1 → Review Agent One, Reviewer Model 2 → Review Agent Two, Reviewer Model 3 → Review Agent Three, Reviewer Model 4 → Review Agent Four, Reviewer Model 5 → Review Agent Five.
- Attach the matching tool nodes: Search API Tool B → Review Agent One, Search API Tool F → Review Agent Two, Search API Tool C → Review Agent Three, Search API Tool D → Review Agent Four, Search API Tool E → Review Agent Five.
- Credential Required: Connect your OpenAI credentials on all reviewer models (Reviewer Model 1 through Reviewer Model 5).
- Credential Required: Connect your SerpApi credentials on all reviewer tools (Search API Tool B, Search API Tool C, Search API Tool D, Search API Tool E, Search API Tool F).
Tip: Because there are many AI nodes (7 OpenAI chat models and 6 SerpApi tools), it’s easiest to create one credential entry for OpenAI and one for SerpApi, then select them across all related nodes.
Step 4: Combine and Summarize Reviewer Outputs
This stage merges parallel review results and condenses them into a unified summary.
- Connect Review Agent One, Review Agent Two, Review Agent Three, Review Agent Four, and Review Agent Five into Combine Reviewer Outputs.
- Connect Combine Reviewer Outputs to Summarize Reviews.
- Ensure Summarize Reviews remains configured as the aggregation step (default settings).
Step 5: Compile the Final Report
The final agent produces the report using a dedicated chat model.
- Connect Summarize Reviews to Final Report Compiler.
- Verify Final Chat Model is connected as the language model for Final Report Compiler.
- Credential Required: Connect your OpenAI credentials on Final Chat Model.
Step 6: Review Optional Documentation Notes
The workflow includes a branding note for reference and documentation.
- Keep Flowpast Branding as a visual reference; it does not affect execution.
Step 7: Test & Activate Your Workflow
Run a manual test to confirm the end-to-end flow before turning it on.
- Click Execute Workflow and send a sample message to Chat Message Trigger.
- Confirm that Product Discovery Agent triggers all five reviewers in parallel and that outputs merge into Combine Reviewer Outputs.
- Verify Summarize Reviews aggregates results and Final Report Compiler produces a structured response.
- When successful, toggle the workflow to Active for production use.
Watch Out For
- SerpAPI credentials can expire or need specific permissions. If things break, check your SerpAPI dashboard usage and key status first.
- If you add Wait nodes or call external services, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
- Default prompts in AI nodes are generic. Add your brand voice early or you’ll be editing outputs forever.
Common Questions
How long does setup take?
About 30 minutes if you already have your OpenAI and SerpAPI keys.
Can a non-developer set this up?
Yes, but you’ll want one careful owner for setup and testing. No coding is required, though you do need to paste API keys and validate the outputs.
Is there a free option?
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in OpenAI API usage (this workflow costs about $0.06 per run) and SerpAPI usage (it can take around 8–15 searches per run).
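Rough monthly math from those per-run figures; the runs-per-month number is an assumption, so check it against your own volume and your SerpAPI plan's search quota:

```python
# Per-run costs from above: ~$0.06 of OpenAI usage, 8-15 SerpAPI searches.
openai_cost_per_run = 0.06
searches_per_run = (8, 15)
runs_per_month = 20  # assumption: roughly five shortlists per week

openai_monthly = round(openai_cost_per_run * runs_per_month, 2)
search_range = (searches_per_run[0] * runs_per_month,
                searches_per_run[1] * runs_per_month)
print(openai_monthly)   # dollars of OpenAI usage per month
print(search_range)     # SerpAPI searches to budget for per month
```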
Should I use n8n Cloud or self-host?
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
How can I customize the workflow?
Start by tightening the prompt your reviewers use so “top picks” matches your definition (budget, size, region, use case). You can also change the Product Discovery Agent to return three products instead of five, or add a sixth reviewer if you want a “budget pick.” If you prefer a spreadsheet deliverable, add a Google Sheets or Excel 365 write step after the Final Report Compiler. For teams doing repeat purchases, store the final report in Airtable so you can filter by category later.
Why are my SerpAPI searches failing?
Usually it’s an invalid or exhausted API key. Check your SerpAPI account usage, confirm the key is pasted into every SerpAPI tool node, and watch for rate limits if you run multiple requests back-to-back.
How many requests can it handle?
On n8n Cloud, capacity depends on your plan’s monthly executions, while self-hosting has no fixed execution cap (it mainly depends on your server). Practically, SerpAPI limits and OpenAI throughput are the real bottlenecks, and this workflow can run several requests per hour comfortably for a small team.
Is n8n better than Zapier or Make for this?
Often, yes, because this flow needs branching, merging, and multi-agent logic that gets awkward fast in simpler builders. n8n also gives you a self-hosted path, which matters if you plan to run lots of research requests without watching task limits. That said, if your goal is only “send a search result to a sheet,” Zapier or Make can be totally fine. This workflow is more like a mini research system than a two-step zap. Talk to an automation expert if you want help choosing the simplest option that still holds up.
When product research stops being a time sink, you make better calls faster. Let the workflow do the digging, then use your judgment where it actually matters.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.