Telegram to Postgres, verified Apollo leads saved clean
Your lead list shouldn’t get worse every time you “move fast.” But when you’re pulling prospects from Apollo, copying them into sheets, cleaning emails, and trying not to re-upload duplicates, the mess is basically guaranteed.
Growth marketers feel it when campaigns stall because the list is dirty. Sales ops gets stuck fixing it. And founders doing their own outbound end up spending nights on what should’ve been a 2-minute task. This Telegram Apollo leads automation takes a voice note or text request and turns it into verified leads saved cleanly in Postgres.
Below you’ll see exactly how the workflow runs, what it automates, what results you can expect, and the common gotchas to avoid so it works on day one.
How This Automation Works
The full n8n workflow, from trigger to final output:
n8n Workflow Template: Telegram to Postgres, verified Apollo leads saved clean
flowchart LR
subgraph sg0["Telegram Message Flow"]
direction LR
n0["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/telegram.svg' width='40' height='40' /></div><br/>Telegram Message Trigger"]
n1@{ icon: "mdi:swap-horizontal", form: "rounded", label: "Detect Voice Or Text", pos: "b", h: 48 }
n2["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/telegram.svg' width='40' height='40' /></div><br/>Retrieve Voice File"]
n3@{ icon: "mdi:robot", form: "rounded", label: "Transcribe Audio", pos: "b", h: 48 }
n4@{ icon: "mdi:swap-vertical", form: "rounded", label: "Capture Text Input", pos: "b", h: 48 }
n5@{ icon: "mdi:robot", form: "rounded", label: "Lead Scraper Agent", pos: "b", h: 48 }
n6@{ icon: "mdi:brain", form: "rounded", label: "OpenAI Chat Engine", pos: "b", h: 48 }
n7@{ icon: "mdi:memory", form: "rounded", label: "Buffer Memory", pos: "b", h: 48 }
n8@{ icon: "mdi:robot", form: "rounded", label: "Structured Parser", pos: "b", h: 48 }
n9@{ icon: "mdi:swap-vertical", form: "rounded", label: "Build Query Payload", pos: "b", h: 48 }
n10["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Assemble Search Link"]
n11@{ icon: "mdi:swap-horizontal", form: "rounded", label: "Execute Apify Actor", pos: "b", h: 48 }
n12@{ icon: "mdi:swap-vertical", form: "rounded", label: "Map Lead Details", pos: "b", h: 48 }
n13@{ icon: "mdi:swap-horizontal", form: "rounded", label: "Filter Verified Emails", pos: "b", h: 48 }
n14["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/postgres.svg' width='40' height='40' /></div><br/>Fetch Existing Emails"]
n15["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/compare.svg' width='40' height='40' /></div><br/>Isolate New Leads"]
n16@{ icon: "mdi:cog", form: "rounded", label: "Marked Existing", pos: "b", h: 48 }
n17["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/supabase.svg' width='40' height='40' /></div><br/>Insert New Leads"]
n19@{ icon: "mdi:swap-vertical", form: "rounded", label: "Compose Telegram Reply", pos: "b", h: 48 }
n20@{ icon: "mdi:cog", form: "rounded", label: "Restrict Results", pos: "b", h: 48 }
n21["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/telegram.svg' width='40' height='40' /></div><br/>Send Telegram Confirmation"]
n20 --> n21
n4 --> n5
n10 --> n11
n3 --> n5
n12 --> n13
n11 --> n12
n0 --> n1
n5 --> n9
n5 --> n14
n7 -.-> n5
n2 --> n3
n1 --> n2
n1 --> n4
n6 -.-> n5
n19 --> n20
n9 --> n10
n15 --> n16
n15 --> n17
n8 -.-> n5
n17 --> n19
n13 --> n15
n14 --> n15
end
subgraph sg1["Flow 2"]
direction LR
n18@{ icon: "mdi:cog", form: "rounded", label: "Node_18", pos: "b", h: 48 }
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n0 trigger
class n3,n5,n8 ai
class n6 aiModel
class n7 ai
class n1,n11,n13 decision
class n14 database
class n10 code
classDef customIcon fill:none,stroke:none
class n0,n2,n10,n14,n15,n17,n21 customIcon
The Problem: Lead lists get dirty the moment you scale
Manual lead scraping starts out “fine” until the volume increases. One person exports from Apollo, another person filters “verified” emails, someone else imports into a database, and suddenly you’ve got duplicates everywhere. Then the bounce rate spikes, domains warm slower, and outreach tools start flagging your campaigns. The worst part is the mental overhead. You stop trusting your own data, so every new campaign begins with cleanup instead of launching.
The friction compounds. Here’s where things usually break down.
- Copy-pasting lead data between tools causes small mistakes that turn into bad personalization later.
- Filtering for verified emails happens inconsistently, so deliverability takes a quiet hit over time.
- Duplicates creep in when two teammates run similar searches a week apart.
- “Quick list building” becomes a half-day job once you include exports, formatting, and imports.
The Solution: Telegram request → verified Apollo leads → Postgres
This workflow turns a simple Telegram message into a clean, deduped prospect table in your database. You send a text request like “Find 200 VP Marketing leads in Austin in SaaS” or just record a quick voice note. If it’s voice, the workflow retrieves the audio and transcribes it using OpenAI. Then an AI agent interprets what you meant (location, industry, job titles, company size) and converts that into structured search parameters. From there, it builds the Apollo search payload and runs a scrape via Apify, maps the returned lead fields, filters to verified emails only, and checks your existing Postgres/Supabase records to prevent duplicates. Finally, it inserts only the new leads and sends a Telegram confirmation with a count, so you know it worked.
The workflow starts in Telegram, so requests feel effortless. In the middle, AI turns messy human language into precise search criteria, then Apollo data is fetched, cleaned, and deduped. At the end, your Postgres table updates instantly and you get a “done” message back in Telegram.
What You Get: Automation vs. Results
| What This Workflow Automates | Results You’ll Get |
|---|---|
| Voice/text request capture, transcription, and intent parsing | Send a lead request in ~30 seconds instead of hand-building Apollo filters |
| Apollo scraping via Apify with consistent field mapping | Lead records with name, title, company, LinkedIn, and email, formatted the same way every time |
| Verified-email filtering and dedupe against Postgres/Supabase | Lower bounce risk and no duplicate outreach to the same prospect |
| Inserting only new leads, with a Telegram confirmation | A list that is ready for outreach almost immediately, with a count you can trust |
Example: What This Looks Like
Say you need 300 verified leads for a new outbound campaign. Manually, you might spend about 20 minutes building the Apollo filters, another 30 minutes exporting and cleaning, then 20 minutes deduping and importing into Postgres. Call it well over an hour, plus the “did we already contact these?” anxiety. With this workflow, you send one Telegram request (maybe 30 seconds), wait a minute or two for processing, and you get a confirmation message once the new leads are inserted. Your list is ready for outreach almost immediately.
What You’ll Need
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- Telegram for the voice/text request trigger
- OpenAI to transcribe voice and interpret requests
- Apollo.io + Apify to fetch lead data reliably
- Postgres or Supabase to store leads and dedupe
- API keys (from OpenAI, Apollo, Apify, and your DB)
Skill level: Intermediate. You’ll connect accounts, paste API keys, and confirm your database table fields match what you want to store.
Don’t want to set this up yourself? Talk to an automation expert (free 15-minute consultation).
How It Works
A Telegram message kicks it off. The workflow uses a Telegram trigger, then checks if you sent text or a voice note. Simple branch, no guesswork.
Voice gets transcribed, text gets captured. If it’s audio, n8n fetches the file and runs it through OpenAI transcription. If it’s already text, it’s saved as-is so the next step sees a consistent input.
An AI agent turns intent into targeting. The agent interprets your request and produces structured search criteria (titles, industries, locations, company size). That structure matters because it prevents “kinda close” searches that pull junk leads.
Apollo leads are fetched, cleaned, and deduped. The workflow assembles a search payload, runs an Apify actor to pull the results, maps key fields (name, title, company, LinkedIn, email), filters to verified emails, and compares against existing records from Postgres/Supabase so repeats don’t get inserted.
You get an instant confirmation. After inserting only new leads, n8n sends a Telegram message back with the count and (optionally) a link to the generated Apollo search.
You can easily modify the search limits and the stored fields based on your needs. See the full implementation guide below for customization options.
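To make the middle of the pipeline concrete, here is a minimal sketch of what an "Assemble Search Link" style Code node might do, turning the agent's structured query into an Apollo search URL. The query parameter names (`personTitles[]`, `personLocations[]`, `qOrganizationKeywordTags[]`) and the field names on the query object are assumptions for illustration; the template ships its own tested JavaScript, so defer to that.

```javascript
// Hypothetical sketch: build an Apollo people-search URL from the
// structured query the AI agent produces. Parameter names are
// assumptions; verify against the JavaScript in the template.
function buildApolloUrl(query) {
  const params = new URLSearchParams();
  for (const title of query.job_title ?? []) params.append("personTitles[]", title);
  for (const loc of query.location ?? []) params.append("personLocations[]", loc);
  for (const biz of query.business ?? []) params.append("qOrganizationKeywordTags[]", biz);
  return `https://app.apollo.io/#/people?${params.toString()}`;
}

// In an n8n Code node you would wrap this as:
// return [{ json: { finalURL: buildApolloUrl($json) } }];
```

Note that `URLSearchParams` percent-encodes the bracket characters in the keys; that is usually fine for query strings, but it is one more reason to treat this as a sketch rather than a drop-in replacement.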
Step-by-Step Implementation Guide
Step 1: Configure the Telegram Trigger
Set up the workflow to start when a Telegram message arrives so the pipeline can accept voice or text lead requests.
- Add Telegram Message Trigger and keep Updates set to `message`.
- Credential Required: Connect your telegramApi credentials in Telegram Message Trigger.
- Open your Telegram bot and send a test message to confirm the trigger is reachable.
Step 2: Route Voice vs. Text Inputs
This step branches the flow based on whether the user sent a voice note or text.
- In Detect Voice Or Text, keep the two rules that check `{{ $json.message.voice.file_id }}` and `{{ $json.message.text }}` to create Voice and Text outputs.
- For the voice branch, configure Retrieve Voice File with Resource set to `file` and File ID set to `{{ $json.message.voice.file_id }}`.
- Credential Required: Connect your telegramApi credentials in Retrieve Voice File.
- For the text branch, configure Capture Text Input to set text to `{{ $json.message.text }}`.
- Connect Retrieve Voice File → Transcribe Audio and Capture Text Input → Lead Scraper Agent.
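The branch condition itself is simple. As a sketch of the decision the Detect Voice Or Text node makes (not the template's actual node configuration): a Telegram update carries either `message.voice` with a `file_id` or plain `message.text`.

```javascript
// Sketch of the voice-vs-text routing decision. Anything that is
// neither a voice note nor text (stickers, photos) falls through.
function detectInputType(update) {
  if (update?.message?.voice?.file_id) return "voice";
  if (typeof update?.message?.text === "string" && update.message.text.length > 0) return "text";
  return "unsupported";
}
```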
Step 3: Set Up AI Processing for Query Extraction
This step turns voice or text into a structured query that the scraper can use.
- In Transcribe Audio, keep Resource set to `audio` and Operation set to `transcribe`.
- Credential Required: Connect your openAiApi credentials in Transcribe Audio.
- Configure Lead Scraper Agent with Text set to `{{ $json.text }}` and ensure Prompt Type is `define` with Has Output Parser enabled.
- In OpenAI Chat Engine, set the Model to `gpt-4.1-nano` (or your preferred model).
- Credential Required: Connect your openAiApi credentials in OpenAI Chat Engine.
- Attach Buffer Memory and Structured Parser to Lead Scraper Agent as AI sub-nodes. Ensure credentials are added to OpenAI Chat Engine (the parent), not to the sub-nodes.
- Confirm Structured Parser uses the provided JSON schema example for `location`, `business`, and `job_title`.
Lead Scraper Agent outputs to both Build Query Payload and Fetch Existing Emails in parallel.
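To make the schema concrete: a request like "Find 200 VP Marketing leads in Austin in SaaS" might parse into something like the object below. The field names follow the `location`/`business`/`job_title` schema mentioned above, but the array-vs-string shape of each field is an assumption to verify against your Structured Parser configuration.

```javascript
// Hypothetical example of the Structured Parser's output for
// "Find 200 VP Marketing leads in Austin in SaaS".
const exampleQuery = {
  location: ["Austin, Texas"],
  business: ["SaaS"],
  job_title: ["VP Marketing", "Vice President of Marketing"],
};
```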
Step 4: Build and Execute the Lead Scrape
Generate a search URL, run the Apify actor, and map the lead results into a consistent structure.
- In Build Query Payload, set Mode to `raw` and JSON Output to `{ "query": {{ $json.output }} }`.
- In Assemble Search Link, keep the provided JavaScript to build the Apollo URL and output `finalURL`.
- In Execute Apify Actor, set Operation to `Run actor`, Build to `latest`, Timeout to `10000`, and Wait For Finish to `60`.
- Set Custom Body in Execute Apify Actor to `{ "getPersonalEmails": true, "getWorkEmails": true, "totalRecords": 500, "url": "{{ $json.finalURL }}" }`.
- Credential Required: Connect your apifyApi credentials in Execute Apify Actor.
- In Map Lead Details, map fields like firstName to `{{ $json.first_name }}`, emailAddress to `{{ $json.email }}`, and jobTitle to `{{ $json.employment_history[0].title }}`.
⚠️ Common Pitfall: If the scraper returns an empty `employment_history` array, the expression `{{ $json.employment_history[0].title }}` will fail. Consider adding guard logic if needed.
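One way to add that guard is optional chaining. n8n expressions are JavaScript, so `{{ $json.employment_history?.[0]?.title ?? "" }}` may work directly depending on your n8n version; otherwise a small Code node before Map Lead Details can do it. A sketch, with field names matching the mappings above:

```javascript
// Defensive jobTitle lookup: returns an empty string when
// employment_history is missing or empty, instead of throwing.
function safeJobTitle(lead) {
  return lead?.employment_history?.[0]?.title ?? "";
}
```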
Step 5: Filter and Deduplicate Leads
Only verified emails are saved, and existing emails are removed before insertion.
- In Filter Verified Emails, keep the condition emailStatus equals `verified` using `{{ $json.emailStatus }}`.
- In Fetch Existing Emails, set Operation to `select`, Return All to `true`, and include outputColumns of `emailAddress`.
- Credential Required: Connect your postgres credentials in Fetch Existing Emails.
- In Isolate New Leads, keep Merge By Fields configured to compare incoming emails with existing ones.
- Leave Marked Existing as a no-op branch for duplicates.
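The dedupe in Isolate New Leads boils down to a set-membership check on the email field. If you ever replace the Merge node with a Code node, the logic looks roughly like this (a sketch, assuming both inputs expose an `emailAddress` field as mapped above):

```javascript
// Sketch: split scraped leads into new vs. already-in-database,
// matching on emailAddress (lowercased, since email comparison
// should be case-insensitive).
function isolateNewLeads(scraped, existing) {
  const known = new Set(existing.map((row) => row.emailAddress.toLowerCase()));
  const fresh = [];
  const dupes = [];
  for (const lead of scraped) {
    (known.has(lead.emailAddress.toLowerCase()) ? dupes : fresh).push(lead);
  }
  return { fresh, dupes };
}
```

The `fresh` array is what would flow on to the insert step; `dupes` corresponds to the no-op Marked Existing branch.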
Step 6: Store Results and Send Telegram Confirmation
New leads are inserted into Supabase, a confirmation message is composed, and a Telegram reply is sent.
- In Insert New Leads, set Table ID to `Leads_n-mail` and map fields like firstName to `{{ $json.firstName }}`, jobTitle to `{{ $json.jobTitle }}`, and country to `{{ $json.country }}`.
- Review the businessIndustry mapping: the template combines `{{ $json.businessIndustry }}` with `{{ $('Apollo Scraper').item.json.organization.industry }}`, and the second expression references a node that does not exist in this workflow.
- Credential Required: Connect your supabase credentials in Insert New Leads (no credentials are configured yet in the template).
- In Compose Telegram Reply, set output to `{{ $input.all().length }} new contacts have been added to the database!` (the template's default text says "Google Sheet," a leftover worth correcting since this workflow writes to Postgres/Supabase).
- In Send Telegram Confirmation, set Text to the same message and Chat ID to `[YOUR_ID]`.
- Credential Required: Connect your telegramApi credentials in Send Telegram Confirmation.
⚠️ Common Pitfall: The businessIndustry mapping in Insert New Leads references a node named Apollo Scraper via `$('Apollo Scraper')`, but no node with that name exists in this workflow. Point the expression at the correct upstream node or remove it to prevent execution errors.
Step 7: Review Placeholder and Non-Functional Nodes
This workflow includes non-executing nodes that are used for annotation or future expansion.
- Keep Flowpast Branding as a documentation note in the canvas.
- Identify Unnamed (type `unknown`) and remove or replace it if it appears in any future connections.
Step 8: Test and Activate Your Workflow
Validate the workflow end-to-end and then enable it for production use.
- Click Execute Workflow and send a text or voice message to your Telegram bot.
- Confirm that Detect Voice Or Text routes correctly and that Lead Scraper Agent outputs a structured query.
- Verify Execute Apify Actor returns lead data and that Insert New Leads creates records in `Leads_n-mail`.
- Check that Send Telegram Confirmation delivers a message with the correct count.
- When successful, toggle the workflow to Active to run automatically on new Telegram messages.
Common Gotchas
- Telegram file access can fail if your bot token changes or lacks permission. If audio retrieval breaks, check the Telegram credentials in n8n first, then verify the chat is still authorized.
- Apify actor runs can take longer than expected on large pulls. Bump up the Wait For Finish value in Execute Apify Actor if downstream nodes fail on empty responses.
- Default prompts in AI nodes are generic. Encode your targeting rules (titles, seniority, exclusions) in the agent's instructions early, or you'll be hand-correcting searches forever.
Frequently Asked Questions
How long does setup take?
About 45 minutes if your API keys and database are ready.
Do I need to know how to code?
No. You’ll mostly connect accounts and paste API keys. The only “technical” part is confirming your Postgres/Supabase table columns match the mapped fields.
Can I run this for free?
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in OpenAI transcription/chat usage plus Apollo/Apify costs, which depend on how many leads you pull.
Should I use n8n Cloud or self-host?
Either works: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
Can I customize the search criteria?
Yes, and it’s the point of the build. You can adjust the AI Agent’s instructions so it recognizes your preferred job title patterns, seniority rules, and location formats, then update the “Build Query Payload” mapping so Apollo gets exactly those filters. Common tweaks include restricting to specific seniority levels, adding exclusions (like “recruiter”), and changing the per-request lead cap so you don’t over-pull.
What if the Telegram trigger stops working?
Usually it’s the bot token or chat authorization. Re-check the Telegram credentials in n8n, then confirm the bot is still in the chat and allowed to read messages. If only voice requests fail, the “Retrieve Voice File” step is the clue: Telegram may be returning a file_id your bot can’t fetch until permissions are corrected. Also worth checking basic rate limits if you’re blasting requests back-to-back.
How many leads can this handle per request?
The workflow is designed to process 500+ leads per request, and the practical limit is usually your Apollo/Apify plan plus how big you set the batch and limit nodes. On n8n Cloud Starter, execution limits depend on your plan tier; on self-hosted n8n there’s no execution cap, but your server resources still matter. If you’re pulling big lists, run smaller batches more often, because that keeps the database insert fast and the dedupe check reliable.
Is this better than Zapier or Make for lead gen?
Often, yes. This workflow leans on branching logic (voice vs text), structured AI parsing, batching, and dedupe checks against a database, which is where Zapier and Make can get clunky or expensive. n8n also gives you the option to self-host, which is a big deal if you run lead-gen at scale. That said, if you only need “Apollo export → Google Sheet” and nothing else, simpler tools can be fine. If you want help choosing, Talk to an automation expert.
This is the kind of automation that quietly fixes your outbound system. Cleaner leads go in, fewer problems come out.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.