Explorium to Google Sheets, clean prospect lists fast
You finally carve out time to build a prospect list, then you lose it to messy exports, broken filters, and “quick” spreadsheet cleanup that turns into a whole afternoon. The worst part is the doubt. Are you even pulling the right people, or just whatever your last search happened to return?
This is where Explorium Sheets automation pays off. Sales ops teams feel it first when reps demand “fresh leads by tomorrow.” A recruiter chasing niche roles runs into the same friction. And a marketing manager building targeted lists for ABM campaigns? Same story, different label.
This workflow turns plain-English prospect requests into validated Explorium searches, then outputs a clean CSV and pushes structured rows into Google Sheets. You’ll see how it works, what you need, and where the real time savings show up.
How This Automation Works
The full n8n workflow, from trigger to final output:
n8n Workflow Template: Explorium to Google Sheets, clean prospect lists fast
flowchart LR
subgraph sg0["When chat message received Flow"]
direction LR
n0@{ icon: "mdi:cog", form: "rounded", label: "Convert to File", pos: "b", h: 48 }
n1@{ icon: "mdi:memory", form: "rounded", label: "Simple Memory", pos: "b", h: 48 }
n2@{ icon: "mdi:play-circle", form: "rounded", label: "When chat message received", pos: "b", h: 48 }
n3@{ icon: "mdi:robot", form: "rounded", label: "AI Agent", pos: "b", h: 48 }
n4@{ icon: "mdi:swap-vertical", form: "rounded", label: "Extract 'data'", pos: "b", h: 48 }
n5["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Merge All Pages"]
n6["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Prepare for CSV"]
n7["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>API Call Validation"]
n8["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Validation Prompter"]
n9@{ icon: "mdi:swap-horizontal", form: "rounded", label: "Is API Call Valid?", pos: "b", h: 48 }
n10["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Chat or Refinement"]
n11@{ icon: "mdi:brain", form: "rounded", label: "Anthropic Chat Model", pos: "b", h: 48 }
n12@{ icon: "mdi:wrench", form: "rounded", label: "Explorium MCP", pos: "b", h: 48 }
n13@{ icon: "mdi:robot", form: "rounded", label: "Output Parser", pos: "b", h: 48 }
n14["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Explorium Prospects API Call"]
n3 --> n7
n12 -.-> n3
n13 -.-> n3
n1 -.-> n3
n4 --> n6
n5 --> n4
n6 --> n0
n10 --> n3
n9 --> n14
n9 --> n8
n7 --> n9
n8 --> n10
n11 -.-> n3
n2 --> n10
n14 --> n5
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n2 trigger
class n3,n13 ai
class n11 aiModel
class n12 ai
class n1 ai
class n9 decision
class n14 api
class n5,n6,n7,n8,n10 code
classDef customIcon fill:none,stroke:none
class n5,n6,n7,n8,n10,n14 customIcon
The Problem: Prospecting Turns Into Spreadsheet Janitorial Work
Prospecting should be about targeting, not translating. But most teams still do the same dance: someone writes a request in Slack (“Find VPs of Sales at B2B SaaS in New York”), then a more technical person tries to convert that into the right filters, runs a few searches, exports a file, and cleans it up so it won’t break the CRM import. Then the real fun starts. Duplicated columns, inconsistent locations, missing contact flags, and random formatting that makes the list look “done” but not usable.
It adds up fast. Here’s where it breaks down.
- You waste about 1–2 hours per list translating “human” requests into workable filters.
- Small formatting mistakes in CSV exports cause failed imports, so people rerun the whole thing instead of fixing one field.
- Validation happens late, which means you find out the payload was wrong after the API call fails or returns junk.
- Follow-up requests (“add Directors,” “filter by 100+ employees”) reset the process because context isn’t saved cleanly.
The Solution: Plain-English Queries → Valid Explorium Filters → Clean Sheets Output
This workflow gives you a chat-style interface where you ask for prospects in normal language, then n8n handles the translation, validation, retrieval, and formatting. A chat message triggers the flow. An AI agent interprets what you meant (titles, industry, location, company size, revenue bands), converts it into Explorium-compatible parameters, and runs a validation check before anything expensive happens. If the payload fails validation, the workflow loops back, asks the model to correct the request structure, and tries again. Once it passes, it pulls prospect data from Explorium (including pagination for bigger lists), formats each row consistently, and generates a downloadable CSV. You also get structured output ready to land in Google Sheets for review, sharing, and CRM prep.
The workflow starts when you send a message in the n8n chat trigger. AI translates your request into Explorium API filters, then a validator checks allowed keys and value formats. Finally, the prospect results are combined across pages, cleaned into CSV-ready rows, and delivered as a file plus a neat Google Sheets-friendly dataset.
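To make that translation concrete, here is an illustrative sketch of what the agent might emit for a typical request. The filter keys and structure below are assumptions for the sake of example, not Explorium's documented API schema:

```javascript
// Illustrative only: filter keys here are assumptions, not Explorium's
// documented schema. The point is the shape of the translation step.
const request = "Find VPs of Sales at B2B SaaS companies in New York";

// Roughly what the AI agent produces from that sentence:
const payload = {
  filters: {
    job_title: ["VP of Sales"],
    industry: ["Software"],
    country_code: ["US"], // the validator checks two-letter codes like this
    region: ["New York"],
  },
  page: 1,
  page_size: 100,
};

console.log(JSON.stringify(payload.filters));
```

The validator then only has to check this structured object, not the free-text request, which is what makes the correction loop cheap to run.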
What You Get: Automation vs. Results
| What This Workflow Automates | Results You’ll Get |
|---|---|
| Translating plain-English requests into valid Explorium search filters | Targeted lists from a one-line chat message, no manual filter building |
| Payload validation with an automatic correction loop | Fewer failed API calls and junk results |
| Paginated retrieval and merging of prospect results | Bigger lists without stitching exports together |
| Row formatting, CSV generation, and Sheets-ready output | Import-ready files with consistent columns |
Example: What This Looks Like
Say you need one targeted list per week for outreach. Manually, it’s usually about 30 minutes to translate the request into filters, another 20 minutes to run searches and export, then about 40 minutes cleaning columns and standardizing locations before it’s safe to import. Call it an hour and a half, and that’s when nothing goes wrong. With this workflow, you drop the request in chat (a minute), the workflow validates and pulls results (often 5–10 minutes), and you get a clean CSV plus a Sheets-ready output. You’re back to doing actual targeting, not cleanup.
What You’ll Need
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- Explorium API credentials to fetch prospects via bearer token
- Google Sheets to review and share clean results
- Anthropic API key (get it from your Anthropic Console)
Skill level: Intermediate. You’ll connect credentials, paste API tokens, and adjust a couple of nodes if your output fields need tweaking.
Don’t want to set this up yourself? Talk to an automation expert (free 15-minute consultation).
How It Works
A chat message kicks everything off. You type what you want (“Marketing Directors at SaaS companies in New York, 50–200 employees”), and n8n receives it through the chat trigger node.
The workflow keeps context as you refine. A session memory buffer remembers what you asked for, so “add Directors too” or “filter for revenue over $10M” doesn’t require starting over.
AI translates your request into a safe, valid search. The language agent uses an LLM (Anthropic chat engine in this template) plus a structured output parser to produce Explorium-ready parameters. Then a validation script checks allowed filter keys, expected formats (like country codes), and proper range fields.
Explorium results get collected and cleaned. Once the payload is valid, the workflow runs the prospect API request, combines results across pages, splits items for row-level formatting, and generates a downloadable CSV file. That same structured dataset can land in Google Sheets for easy review.
You can modify the output columns to match your CRM import template. See the full implementation guide below for customization options.
Step-by-Step Implementation Guide
Step 1: Configure the Chat Trigger
This workflow starts when a user submits a chat message, which is then routed into the AI processing flow.
- Add the Incoming Chat Trigger node as your trigger.
- Keep the default settings in Incoming Chat Trigger unless you need to change its Webhook or chat options.
- Connect Incoming Chat Trigger to Merge Chat or Fixes to begin the processing chain.
Step 2: Connect Explorium API Credentials
The workflow calls Explorium’s MCP API and needs authentication for both the tool connection and the API request.
- Open Prospect API Request and set the URL to `=https://api.explorium.ai/v1/prospects`.
- Credential Required: Connect your `httpHeaderAuth` credentials in Prospect API Request.
- Credential Required: Connect your `httpHeaderAuth` credentials for the Explorium MCP tool used by Language Agent Core. The Explorium MCP Tool is a sub-node; credentials should be set on the parent agent connection.
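Under the hood, the Header Auth credential simply attaches a bearer token header to the outgoing call. A minimal sketch of the request options it effectively produces (the header shape is an assumption; verify against your Explorium account docs):

```javascript
// Sketch (assumption): the request options n8n's Header Auth credential
// effectively produces for the Prospect API Request node.
function buildProspectRequest(token, payload) {
  return {
    method: "POST",
    url: "https://api.explorium.ai/v1/prospects",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`, // supplied by the Header Auth credential
    },
    body: JSON.stringify(payload),
  };
}

console.log(buildProspectRequest("YOUR_EXPLORIUM_TOKEN", { page: 1 }).headers.Authorization);
```

If authentication fails later, this is the header to inspect in the n8n execution logs.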
Step 3: Set Up the AI Agent and Memory
The AI layer converts chat input into a validated JSON payload for the MCP API and supports tool calls and memory.
- In Merge Chat or Fixes, keep the logic that sets `combinedInput` from error feedback or user input.
- Open Language Agent Core and set Text to `{{ $json.combinedInput }}`.
- Ensure Language Agent Core has Has Output Parser enabled and the custom system message remains intact for MCP format rules.
- Attach Session Memory Buffer to Language Agent Core and set Context Window Length to `100`.
- Connect Structured Output Parser to Language Agent Core and keep the JSON schema example as provided.
- Connect Anthropic Chat Engine as the language model for Language Agent Core and select the model `claude-sonnet-4-20250514`.
- Credential Required: Connect your `anthropicApi` credentials in Anthropic Chat Engine.
Step 4: Configure Validation and the Correction Loop
Invalid MCP payloads are caught, corrected, and sent back into the AI agent until valid.
- Keep the validation logic in Validate API Payload as-is to enforce allowed filters and formats.
- In Check Payload Validity, confirm the condition checks that `{{ $json.isValid }}` is true.
- Ensure Check Payload Validity sends valid output to Prospect API Request and invalid output to Request Correction Prompt.
- Keep Request Correction Prompt connected back to Merge Chat or Fixes to loop corrected prompts into the AI agent.
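The validation step boils down to a whitelist-and-format check, similar in spirit to what the Validate API Payload code node does. The allowed keys and the country-code rule below are illustrative assumptions, not the node's exact logic:

```javascript
// Minimal sketch of the validation idea: whitelist filter keys and check basic
// value formats before spending an API call. Keys here are assumptions.
const ALLOWED_KEYS = ["job_title", "industry", "country_code", "region", "company_size"];

function validatePayload(payload) {
  const errors = [];
  for (const key of Object.keys(payload.filters || {})) {
    if (!ALLOWED_KEYS.includes(key)) errors.push(`Unknown filter: ${key}`);
  }
  for (const code of payload.filters?.country_code || []) {
    if (!/^[A-Z]{2}$/.test(code)) errors.push(`Bad country code: ${code}`);
  }
  return { isValid: errors.length === 0, errors };
}

// Invalid payloads loop back to the agent with the error text as feedback.
console.log(validatePayload({ filters: { country_code: ["usa"] } }));
```

Because the errors are returned as plain text, the correction prompt can hand them straight back to the agent as feedback for the next attempt.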
Step 5: Configure API Response Processing and File Output
Results are paginated, merged, transformed into CSV-friendly rows, and converted to a file.
- In Prospect API Request, keep JSON Body set to `{{ $json.output }}` and ensure Method is `POST`.
- Verify pagination in Prospect API Request uses `{{ $response.body.page + 1 }}` and completes when `{{ $response.body.data.length === 0 }}`.
- Use Combine Page Results to merge all paginated data arrays into a single `data` array.
- In Split Data Field, set Field to Split Out to `data`.
- Keep the row mapping in Format Rows for CSV to normalize prospect fields for export.
- Connect Generate File Output to convert the formatted rows into a file output.
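The page-merging and row-formatting steps amount to a flatten-and-map over the API responses. Here is a rough sketch; the prospect field names are illustrative assumptions, not Explorium's documented response schema:

```javascript
// Sketch of the Combine Page Results / Format Rows for CSV steps: flatten
// paginated responses into one array, then normalize each prospect row.
// Field names are illustrative assumptions.
const pages = [
  { data: [{ full_name: "Ada Example", company_name: "Acme", city: "New York" }] },
  { data: [{ full_name: "Grace Example", company_name: "Globex", city: "Boston" }] },
];

// Merge every page's data array into one flat list.
const allProspects = pages.flatMap((page) => page.data || []);

// Map raw API fields onto consistent CSV column names, defaulting blanks.
const rows = allProspects.map((p) => ({
  Name: p.full_name || "",
  Company: p.company_name || "",
  Location: p.city || "",
}));

console.log(rows.length); // one CSV row per prospect
```

Keeping the defaults (`|| ""`) is what prevents the ragged-column CSVs that break CRM imports.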
Step 6: Test and Activate Your Workflow
Validate the end-to-end flow using a live chat input, then enable the workflow for production use.
- Click Execute Workflow and send a chat message to Incoming Chat Trigger.
- Confirm Language Agent Core outputs a valid MCP request and Validate API Payload sets `isValid` to `true`.
- Verify Prospect API Request returns data, Combine Page Results merges pages, and Generate File Output produces the file.
- Once testing succeeds, toggle the workflow Active to enable it in production.
Common Gotchas
- Explorium credentials can expire or need specific permissions. If things break, check your Bearer token header in the MCP Client Tool and the Prospect API Request nodes first.
- If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
- Default prompts in AI nodes are generic. Add your brand voice early or you’ll be editing outputs forever.
Frequently Asked Questions
How long does the setup take?
About 30–60 minutes if you already have your API keys.
Do I need to know how to code?
No. You’ll mostly connect credentials and edit a couple of fields. The code nodes are already built; you only tweak them if you want different columns.
Is n8n free?
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in Anthropic API usage plus your Explorium subscription costs.
Where should I host this workflow?
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
Can I customize the output columns?
Yes, and you should. Most people customize the “Format Rows for CSV” code node to match their CRM’s import headers, then adjust “Generate File Output” so the CSV uses the exact column order they expect. Common tweaks include adding a “has_email” flag, splitting location into country/region/city, and keeping LinkedIn URLs in a dedicated field. If your team wants enrichment fields too, you can also expand the mapping to include company revenue and employee ranges.
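As a sketch of that kind of tweak, here is a hypothetical row mapper with a has_email flag and a split location. The input field names (`full_name`, `email`, `location`, `linkedin_url`) are assumptions; match them to whatever your Prospect API Request actually returns:

```javascript
// Hypothetical customization of the Format Rows for CSV mapping: adds a
// has_email flag and splits a combined "city, region, country" location
// string. Input field names are assumptions, not Explorium's schema.
function formatRow(p) {
  const [city = "", region = "", country = ""] = (p.location || "")
    .split(",")
    .map((s) => s.trim());
  return {
    Name: p.full_name || "",
    Email: p.email || "",
    has_email: p.email ? "yes" : "no",
    City: city,
    Region: region,
    Country: country,
    LinkedIn: p.linkedin_url || "",
  };
}

console.log(formatRow({ full_name: "Ada Example", location: "New York, NY, US" }));
```

Rename the keys on the returned object to your CRM’s exact import headers and the CSV will drop straight into your import template.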
What should I check if the Explorium API call fails?
Usually it’s an expired or misformatted Bearer token in the Header Auth credentials. Update the credentials used by both the MCP client tool and the Prospect API Request, then re-run a test chat. If you’re still stuck, your Explorium plan may be rate limiting you or blocking a filter combination, so check the API response body in the n8n execution logs.
How many prospects can this workflow handle?
Explorium searches typically cap at 10,000 results, and this workflow paginates up to that limit. On n8n Cloud, capacity mainly depends on your monthly executions. If you self-host, there’s no execution cap, but large lists still take longer and can hit API rate limits.
Is n8n a better fit than Zapier or Make for this?
Often, yes. This workflow needs conditional logic, validation, retries, pagination, and structured AI output, and that gets clunky (and pricey) in tools built for simple “A to B” zaps. n8n also gives you the option to self-host, which is a big deal if you’re generating lots of lists. On the other hand, if you only need a basic one-time export with no validation loop, Zapier or Make can be quicker to set up. The decision usually comes down to volume and how strict you want the output formatting to be. Talk to an automation expert if you’re not sure which fits.
Clean lists aren’t glamorous, but they change everything downstream. Set this up once, and you’ll stop spending your best hours fixing CSVs.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.