Telegram + Perplexity: cited answers in your chat
You ask a “quick research question” in chat, and suddenly you’re juggling five tabs, a half-trusted AI summary, and zero idea where the facts came from.
Founders feel it when they’re prepping a pitch and need sources fast. Analysts feel it when they have to defend every claim in a doc. And client-facing consultants get stuck rewriting answers because the first one wasn’t auditable. A Telegram Perplexity bot fixes that.
This workflow turns your Telegram bot into a research assistant that answers with citations, remembers the thread, and only responds to approved users. You’ll see how it works, what you need, and where teams usually trip up.
How This Automation Works
See how this solves the problem:
n8n Workflow Template: Telegram + Perplexity: cited answers in your chat
flowchart LR
subgraph sg0["Incoming Telegram Hook Flow"]
direction LR
n0@{ icon: "mdi:robot", form: "rounded", label: "Research Orchestrator", pos: "b", h: 48 }
n1@{ icon: "mdi:brain", form: "rounded", label: "OpenAI Chat Engine", pos: "b", h: 48 }
n2@{ icon: "mdi:memory", form: "rounded", label: "Session Buffer Memory", pos: "b", h: 48 }
n3@{ icon: "mdi:cog", form: "rounded", label: "Perplexity Quick Search", pos: "b", h: 48 }
n4@{ icon: "mdi:cog", form: "rounded", label: "Perplexity Deep Search", pos: "b", h: 48 }
n5["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/telegram.svg' width='40' height='40' /></div><br/>Incoming Telegram Hook"]
n6["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/telegram.svg' width='40' height='40' /></div><br/>Telegram Response Sender"]
n7@{ icon: "mdi:swap-horizontal", form: "rounded", label: "Access Validation Filter", pos: "b", h: 48 }
n7 --> n0
n0 --> n6
n3 -.-> n0
n5 --> n7
n1 -.-> n0
n4 -.-> n0
n2 -.-> n0
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n5 trigger
class n0 ai
class n1 aiModel
class n2 ai
class n7 decision
classDef customIcon fill:none,stroke:none
class n5,n6 customIcon
The Challenge: Getting trustworthy research answers inside chat
Chat is where questions happen in real life. A teammate asks for competitor pricing, a client wants “one credible source,” or you’re trying to recall what you decided last week. The mess starts when answers aren’t traceable. You paste an AI response, someone asks “source?”, and now you’re backtracking through browser history and half-remembered prompts. Even worse, follow-up questions lose context, so you restate everything and still don’t get a clean, cited explanation.
The friction compounds. Here’s where it breaks down.
- You waste about an hour a day re-searching things you already asked last week because the context is gone.
- Answers arrive without citations, so you can’t confidently send them to a client, a boss, or a compliance-minded teammate.
- “One more follow-up” turns into a brand-new prompt, which means inconsistent results and more manual cleanup.
- If your Telegram bot link gets shared, anyone can query it unless you explicitly lock it down.
The Fix: A Telegram research assistant with citations and memory
This n8n workflow listens for new messages in Telegram, checks if the sender is approved, and then routes the question into a research “orchestrator” that decides how deep to go. For simple lookups, it can use Perplexity Sonar for speed. When the question needs multi-source synthesis, it switches to Sonar Pro and comes back with a tighter, more defensible summary. Throughout the conversation, it keeps a short session memory tied to the Telegram chat ID, so follow-ups like “compare that to last year” still make sense. Finally, it sends the reply straight back to Telegram with clickable source links so you can verify the claim in seconds.
The workflow starts with an incoming Telegram message and an access check. Then an AI agent combines context memory with Perplexity tools to produce a cited answer. The final output is a clean Telegram reply you can forward or paste into a doc without feeling nervous about the facts.
What Changes: Before vs. After
| What This Eliminates | Impact You’ll See |
|---|---|
| Re-searching questions you already asked last week | Session memory keeps follow-ups in context |
| Answers without sources you can’t forward confidently | Every reply includes clickable Perplexity citations |
| Juggling chat, browser tabs, and half-trusted summaries | Cited answers arrive directly in the Telegram thread |
| Anyone with the bot link querying it | An access filter limits usage to approved user IDs |
Real-World Impact
Say your team asks 20 research questions a week in Telegram. Manually, even “quick” research often means about 10 minutes to prompt, verify, and paste sources, so that’s roughly 3 hours gone. With this automation, you send the question once and the citations come back with the answer; your active time drops to about 2 minutes per question. That’s roughly 2.5 hours back every week, and the answers are easier to trust.
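The estimate above is easy to sanity-check for your own numbers. A minimal sketch, assuming the figures from this example (20 questions a week, 10 minutes manual vs. 2 minutes automated):

```javascript
// Rough weekly time savings; the inputs are assumptions you should
// replace with your own team's question volume and handling times.
function weeklySavingsMinutes(questionsPerWeek, manualMinutes, automatedMinutes) {
  return questionsPerWeek * (manualMinutes - automatedMinutes);
}

const saved = weeklySavingsMinutes(20, 10, 2);
console.log(`Saved per week: ${saved} minutes (~${(saved / 60).toFixed(1)} h)`);
// → Saved per week: 160 minutes (~2.7 h)
```

Adjust the three inputs and the payoff math updates itself.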
Requirements
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- Telegram for the bot and chat interface
- Perplexity to generate cited research answers
- OpenAI API key (get it from the OpenAI dashboard)
Skill level: Beginner. You’ll paste API keys, set your allowed Telegram user ID, and test the bot.
Need help implementing this? Talk to an automation expert (free 15-minute consultation).
The Workflow Flow
A Telegram message triggers everything. The workflow starts the moment someone sends your bot a question through Telegram.
Access is validated before anything else runs. An approval filter checks the Telegram user ID so only the people you allow can use the research assistant.
The agent builds an answer using memory plus Perplexity. The AI agent pulls in recent context from session memory (tied to the chat ID), then chooses between quick search and deep search based on how complex the request is.
A cited reply is sent back to Telegram. The response includes source URLs from Perplexity so you can click through and verify key claims immediately.
You can easily modify the allowed-user list to include a whole team based on your needs. See the full implementation guide below for customization options.
Step-by-Step Implementation Guide
Step 1: Configure the Telegram Trigger
Set up the workflow entry point so it listens for incoming Telegram messages.
- Add and open Incoming Telegram Hook.
- Set Updates to `message`.
- Credential Required: Connect your Telegram credentials for Incoming Telegram Hook (not configured in the workflow JSON).
Step 2: Restrict Access with the Filter
Limit usage to approved Telegram user IDs using the access gate.
- Open Access Validation Filter.
- Set the condition’s Left Value to `={{ $json.message.from.id }}`.
- Replace Right Value with your Telegram user ID (currently `[YOUR_ID]`).
- Confirm the execution path is Incoming Telegram Hook → Access Validation Filter → Research Orchestrator.
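The stock filter matches a single user ID. If you want to approve a whole team, the equivalent check can match against a list instead, for example in an n8n Code node. A minimal sketch (the IDs are placeholders, not real accounts):

```javascript
// Hypothetical allowlist check mirroring the Access Validation Filter,
// extended from a single ID to a set of approved Telegram user IDs.
// Replace these placeholder IDs with your team's real ones.
const ALLOWED_USER_IDS = new Set([111111111, 222222222, 333333333]);

function isAllowed(message) {
  // Telegram update payloads carry the sender under message.from.id
  return ALLOWED_USER_IDS.has(message.from.id);
}

console.log(isAllowed({ from: { id: 111111111 } })); // → true
console.log(isAllowed({ from: { id: 999999999 } })); // → false
```

Using a `Set` keeps lookups fast and makes adding or removing teammates a one-line change.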
Step 3: Set Up the Research AI Orchestrator
Configure the AI agent, memory, and research tools that generate responses.
- Open Research Orchestrator and set Text to `={{ $json.chatInput }} {{ $json.message.text }}`.
- Keep the System Message as provided to enforce research depth and citation requirements.
- Open OpenAI Chat Engine and select the model `gpt-4o-mini`.
- Credential Required: Connect your openAiApi credentials in OpenAI Chat Engine.
- Open Session Buffer Memory and set Session Key to `={{ $json.message.chat.id }}` with Session ID Type set to `customKey`.
- Open Perplexity Quick Search and set Model to `sonar`.
- Open Perplexity Deep Search and set Model to `sonar-pro`.
- Credential Required: Connect your perplexityApi credentials for both Perplexity Quick Search and Perplexity Deep Search. These AI tools are connected to Research Orchestrator, so make sure the credentials are available to the parent orchestration flow.
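In the workflow, the agent’s system message is what steers it between the quick and deep tools. To make that choice concrete, here is an illustrative heuristic of the kind you might describe in the prompt; the keywords and length threshold are assumptions for demonstration, not values from the workflow:

```javascript
// Illustrative routing heuristic: quick lookup vs. multi-source research.
// The hint words and 120-character threshold are assumed for this sketch.
const DEEP_HINTS = ['compare', 'analyze', 'trend', 'versus', 'report'];

function chooseTool(question) {
  const text = question.toLowerCase();
  const looksComplex =
    text.length > 120 || DEEP_HINTS.some((hint) => text.includes(hint));
  return looksComplex
    ? 'Perplexity Deep Search (sonar-pro)'
    : 'Perplexity Quick Search (sonar)';
}

console.log(chooseTool('What year was Telegram launched?'));
// → Perplexity Quick Search (sonar)
console.log(chooseTool('Compare the pricing of our top three competitors'));
// → Perplexity Deep Search (sonar-pro)
```

In practice the LLM makes this call itself, but writing the criteria down like this in the system message makes the routing far more predictable.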
Step 4: Configure the Telegram Response Output
Return the AI-generated response back to the same Telegram chat.
- Open Telegram Response Sender.
- Set Text to `={{ $json.output }}`.
- Set Chat ID to `={{ $('Incoming Telegram Hook').item.json.message.chat.id }}`.
- Credential Required: Connect your Telegram credentials for Telegram Response Sender (not configured in the workflow JSON).
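One detail worth knowing here: Telegram rejects messages longer than 4,096 characters, and deep-research answers with citations can exceed that. A minimal sketch of chunking the output before sending, e.g. in a Code node placed ahead of Telegram Response Sender:

```javascript
// Telegram caps message text at 4096 characters, so long cited answers
// may need to be split into multiple sends.
const TELEGRAM_MAX = 4096;

function splitMessage(text, limit = TELEGRAM_MAX) {
  const chunks = [];
  for (let i = 0; i < text.length; i += limit) {
    chunks.push(text.slice(i, i + limit));
  }
  return chunks.length ? chunks : [''];
}

console.log(splitMessage('a'.repeat(5000)).length); // → 2
```

This naive version can split mid-word or mid-link; a production version would break on paragraph boundaries so citation URLs stay intact.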
Step 5: Test and Activate Your Workflow
Validate the full path from Telegram to AI response and enable the workflow for production use.
- Click Test Workflow and send a message to your Telegram bot.
- Confirm the execution path is Incoming Telegram Hook → Access Validation Filter → Research Orchestrator → Telegram Response Sender.
- Verify the response includes structured text and citations from Perplexity sources.
- Click Activate to run the workflow continuously.
Watch Out For
- Telegram credentials can expire or be misconfigured if you regenerate the bot token. If things break, check your Telegram bot token in n8n credentials first.
- If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
- Default prompts in AI nodes are generic. Add your brand voice early or you’ll be editing outputs forever.
Common Questions
**How long does this take to set up?**
About 30 minutes if you already have your API keys.
**Can I build this without coding skills?**
Yes. No coding required. You’ll mostly be pasting credentials and adding approved Telegram user IDs.
**Is this free to run?**
Mostly. n8n has a free self-hosted option and a free trial on n8n Cloud; Cloud plans start at $20/month for higher volume. You’ll also need to factor in Perplexity and OpenAI API usage costs, which depend on how many questions you process.
**Where should I host n8n?**
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
**Can I customize how the bot researches and responds?**
You can. Most customizations happen in the Research Orchestrator agent: change the system message to match your tone, adjust how citations are formatted, or bias the routing so Sonar Pro is used more often for high-stakes topics. If you want multiple approved users, update the Access Validation Filter to match a list of Telegram user IDs instead of a single ID. You can also swap the “quick vs deep” behavior to prioritize speed during busy hours and depth during planned research blocks.
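As an example of the citation-formatting tweak: Perplexity’s API returns source URLs as an array alongside the answer, and a small helper can turn that into a numbered footer. A sketch, assuming a plain array of URL strings:

```javascript
// Sketch: append an array of citation URLs (the shape Perplexity's API
// returns them in) as a numbered "Sources" footer on the answer text.
function appendSources(answer, citations) {
  if (!citations || citations.length === 0) return answer;
  const lines = citations.map((url, i) => `[${i + 1}] ${url}`);
  return `${answer}\n\nSources:\n${lines.join('\n')}`;
}

console.log(appendSources('Rates rose in Q3.', ['https://example.com/report']));
```

You could adapt the same idea to emit Telegram-friendly Markdown links instead of bare URLs.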
**What do I check if the bot stops responding?**
Usually it’s the bot token. Regenerate it in BotFather, then update the Telegram credentials in n8n and re-test the Telegram Trigger. If the trigger works but messages don’t send back, check that the bot is allowed to message the chat (group permissions can block it). Also confirm your Access Validation Filter is matching your real Telegram user ID, because a mismatch looks like “nothing happens.”
**How many questions can it handle?**
If you self-host, there’s no execution limit (it depends on your server). On n8n Cloud, capacity depends on your plan’s monthly executions, and this workflow generally uses one execution per incoming Telegram message.
**Is n8n better than Zapier or Make for this?**
Often, yes, because this isn’t just a “send message to AI” automation. You’re combining an AI agent, tool routing (Sonar vs Sonar Pro), and session memory so follow-ups stay coherent, and that kind of logic gets awkward in most no-code task runners. n8n also gives you a clean self-hosting path if you don’t want to pay per task forever. Zapier or Make can still be fine for simple one-off Q&A, especially if your team wants a very guided UI. If you’re not sure, run this for a week and compare how many answers you can confidently forward without re-checking. Talk to an automation expert if you want help choosing.
You get cited answers where the work already happens: Telegram. Set it up once, and your next “quick question” won’t turn into a mini research project.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.