Telegram + Supabase: instant answers from your docs
Your support inbox is full of repeat questions, but the answers live in ten different places. Someone pings you in Telegram, then you’re hunting PDFs, scrolling docs, and trying to stay consistent while you type the same reply again.
This hits support leads hardest, but founders and agency operators building bots for clients feel it too. With this automation, your Telegram bot can reply in seconds using your real documentation, even when users send screenshots, PDFs, or voice notes.
Below you’ll see how the workflow routes each message type, pulls the right context from Supabase, and sends on-brand answers back through Telegram.
How This Automation Works
Here’s the complete workflow you’ll be setting up:
n8n Workflow Template: Telegram + Supabase: instant answers from your docs
flowchart LR
subgraph sg0["Telegram Message Flow"]
direction LR
n0["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/telegram.svg' width='40' height='40' /></div><br/>Telegram Message Trigger"]
n1["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/telegram.svg' width='40' height='40' /></div><br/>Send Telegram Reply"]
n2["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/telegram.svg' width='40' height='40' /></div><br/>Fetch Voice File"]
n3@{ icon: "mdi:robot", form: "rounded", label: "Transcribe Audio", pos: "b", h: 48 }
n4["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/telegram.svg' width='40' height='40' /></div><br/>Fetch PDF File"]
n5["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/telegram.svg' width='40' height='40' /></div><br/>Fetch Image File"]
n6["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Normalize Image Mime"]
n7["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/telegram.svg' width='40' height='40' /></div><br/>Fetch Photo File"]
n8["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Normalize Photo Mime"]
n9@{ icon: "mdi:robot", form: "rounded", label: "Analyze Photo Content", pos: "b", h: 48 }
n10@{ icon: "mdi:swap-horizontal", form: "rounded", label: "Validate File Support", pos: "b", h: 48 }
n11["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/telegram.svg' width='40' height='40' /></div><br/>Fetch Spreadsheet File"]
n12@{ icon: "mdi:cog", form: "rounded", label: "Parse PDF Text", pos: "b", h: 48 }
n13@{ icon: "mdi:cog", form: "rounded", label: "Parse Spreadsheet Data", pos: "b", h: 48 }
n14["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/telegram.svg' width='40' height='40' /></div><br/>Send Typing Action"]
n15["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Check Document Extensions"]
n16["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Categorize Documents"]
n17@{ icon: "mdi:swap-horizontal", form: "rounded", label: "Route Incoming Message", pos: "b", h: 48 }
n18@{ icon: "mdi:swap-horizontal", form: "rounded", label: "Route Document Types", pos: "b", h: 48 }
n19["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/telegram.svg' width='40' height='40' /></div><br/>Fetch JSON File"]
n20@{ icon: "mdi:cog", form: "rounded", label: "Parse JSON Content", pos: "b", h: 48 }
n21["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/telegram.svg' width='40' height='40' /></div><br/>Fetch XML File"]
n22@{ icon: "mdi:cog", form: "rounded", label: "Parse XML Content", pos: "b", h: 48 }
n23["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/telegram.svg' width='40' height='40' /></div><br/>Fetch Word File"]
n24@{ icon: "mdi:cog", form: "rounded", label: "Convert File to Base64", pos: "b", h: 48 }
n25["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Docx to Text API"]
n26["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Retrieve Text Output"]
n27@{ icon: "mdi:robot", form: "rounded", label: "Analyze Uploaded Image", pos: "b", h: 48 }
n32@{ icon: "mdi:brain", form: "rounded", label: "OpenAI Chat Model", pos: "b", h: 48 }
n34@{ icon: "mdi:vector-polygon", form: "rounded", label: "Search Embeddings", pos: "b", h: 48 }
n35@{ icon: "mdi:memory", form: "rounded", label: "Postgres Conversation Memory", pos: "b", h: 48 }
n36@{ icon: "mdi:robot", form: "rounded", label: "Cohere Reranker", pos: "b", h: 48 }
n37@{ icon: "mdi:swap-vertical", form: "rounded", label: "Map Text Field", pos: "b", h: 48 }
n38@{ icon: "mdi:swap-vertical", form: "rounded", label: "Compose Photo Text", pos: "b", h: 48 }
n39@{ icon: "mdi:swap-vertical", form: "rounded", label: "Map Error Message", pos: "b", h: 48 }
n40@{ icon: "mdi:swap-vertical", form: "rounded", label: "Map Spreadsheet Text", pos: "b", h: 48 }
n41@{ icon: "mdi:swap-vertical", form: "rounded", label: "Map JSON Text", pos: "b", h: 48 }
n42@{ icon: "mdi:swap-vertical", form: "rounded", label: "Map XML Text", pos: "b", h: 48 }
n43@{ icon: "mdi:swap-vertical", form: "rounded", label: "Map PDF Text", pos: "b", h: 48 }
n44@{ icon: "mdi:swap-vertical", form: "rounded", label: "Map Doc Text", pos: "b", h: 48 }
n46@{ icon: "mdi:robot", form: "rounded", label: "Knowledge Base Assistant", pos: "b", h: 48 }
n47@{ icon: "mdi:cube-outline", form: "rounded", label: "Supabase Vector Search", pos: "b", h: 48 }
n10 --> n16
n10 --> n39
n4 --> n12
n21 --> n22
n37 --> n46
n6 --> n27
n19 --> n20
n26 --> n44
n40 --> n46
n41 --> n46
n42 --> n46
n43 --> n46
n44 --> n46
n8 --> n9
n9 --> n38
n2 --> n3
n5 --> n6
n7 --> n8
n24 --> n25
n27 --> n38
n18 --> n5
n18 --> n4
n18 --> n23
n18 --> n11
n18 --> n19
n18 --> n21
n36 --> n47
n12 --> n43
n22 --> n42
n0 --> n17
n0 --> n14
n20 --> n41
n3 --> n46
n34 -.-> n47
n32 -.-> n46
n11 --> n13
n17 --> n37
n17 --> n2
n17 --> n7
n17 --> n15
n35 -.-> n46
n39 --> n46
n23 --> n24
n38 --> n46
n16 --> n18
n46 --> n1
n13 --> n40
n47 -.-> n46
n15 --> n10
n25 --> n26
end
subgraph sg1["Manual Execution Start Flow"]
direction LR
n28@{ icon: "mdi:play-circle", form: "rounded", label: "Manual Execution Start", pos: "b", h: 48 }
n29@{ icon: "mdi:cog", form: "rounded", label: "Retrieve Drive File", pos: "b", h: 48 }
n30@{ icon: "mdi:robot", form: "rounded", label: "Default Data Loader", pos: "b", h: 48 }
n31@{ icon: "mdi:robot", form: "rounded", label: "Recursive Text Splitter", pos: "b", h: 48 }
n33@{ icon: "mdi:vector-polygon", form: "rounded", label: "Generate Embeddings", pos: "b", h: 48 }
n45@{ icon: "mdi:cube-outline", form: "rounded", label: "Insert Into Supabase Vectors", pos: "b", h: 48 }
n29 --> n45
n33 -.-> n45
n30 -.-> n45
n31 -.-> n30
n28 --> n29
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n0,n28 trigger
class n3,n9,n27,n36,n46,n30,n31 ai
class n32 aiModel
class n35 ai
class n47,n45 ai
class n34,n33 ai
class n10,n17,n18 decision
class n25,n26 api
class n6,n8,n15,n16 code
classDef customIcon fill:none,stroke:none
class n0,n1,n2,n4,n5,n6,n7,n8,n11,n14,n15,n16,n19,n21,n23,n25,n26 customIcon
Why This Matters: Support Answers That Don’t Drift
Support is one of those jobs where “just answer it quickly” sounds easy until you do it all day. The same pricing question shows up again. Someone sends a blurry screenshot of an error. Another person drops a PDF and says “what does section 4 mean?” You can respond, sure, but the mental load is brutal, and tiny inconsistencies creep in. One teammate quotes an old policy. Another uses the wrong link. A week later you’ve got a mess: confused customers, longer threads, and less time for the work that actually moves the business.
It adds up fast. Here’s where the friction usually shows up.
- Answers change depending on who’s replying, which quietly erodes trust over time.
- Files get ignored because opening, reading, and summarizing them takes too long during busy hours.
- Voice notes and screenshots force you into “manual translation” mode before you can even start helping.
- Even a decent chatbot fails once questions require context from your actual docs, policies, or knowledge base.
What You’ll Build: A Telegram Bot That Answers From Your Knowledge Base
This workflow turns Telegram into a practical support front door. A message comes in (text, voice, image, or document), and the workflow routes it to the right handler. Audio gets transcribed into text. Images and photos get analyzed so the bot can “read” what’s shown. Documents like PDFs, spreadsheets, JSON/XML, and Word files are parsed or converted into text. Then the workflow uses your Supabase vector database to retrieve relevant chunks of your documentation, combines that context with the user’s question, and asks OpenAI to generate a grounded reply. Finally, the answer is sent back to Telegram, in the tone you define.
The workflow starts with a Telegram trigger and a message router that separates each input type. Next, content is normalized into plain text and enriched with retrieved context from Supabase (plus conversation memory). The AI agent produces a consistent response, and Telegram delivers it immediately.
What You’re Building
| What Gets Automated | What You’ll Achieve |
|---|---|
| Routing, transcription, and parsing of text, voice, image, and document messages | Every incoming question arrives as clean text the assistant can work with |
| Vector search over your Supabase knowledge base plus AI reply generation | Consistent, on-brand answers delivered back in Telegram within seconds |
Expected Results
Say your team handles about 20 Telegram questions per day, and roughly half include a file (PDF, screenshot, or voice note). Manually, it’s easy to spend about 10 minutes per question between opening attachments, searching docs, and crafting a clean reply, so that’s a bit over 3 hours daily. With this workflow, you spend maybe 1 minute reading the question while the bot transcribes, extracts, and pulls context from Supabase. That hands you back roughly 3 hours a day, without pushing customers to a clunky ticket portal.
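If you want to sanity-check that estimate, the arithmetic is simple enough to script. The per-question minutes below are the assumptions stated above, not measured values:

```javascript
// Back-of-the-envelope time savings under the stated assumptions.
const questionsPerDay = 20;
const manualMinutesPerQuestion = 10;  // open attachment, search docs, write reply
const assistedMinutesPerQuestion = 1; // skim the question while the bot does the rest

const savedMinutes = questionsPerDay * (manualMinutesPerQuestion - assistedMinutesPerQuestion);
console.log(savedMinutes / 60); // 3 hours saved per day
```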
Before You Start
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- Telegram for receiving messages and sending replies.
- Supabase to store embeddings and run vector search.
- OpenAI API key (get it from your OpenAI dashboard).
Skill level: Intermediate. You’ll connect a few credentials, paste API keys, and edit prompts, but you don’t need to write an app.
Want someone to build this for you? Talk to an automation expert (free 15-minute consultation).
Step by Step
A Telegram message comes in. The Telegram trigger captures the user’s text, file, or voice note. The workflow also sends a “typing” action so the experience feels responsive while processing happens.
The message is routed and normalized. A Switch node routes by input type, then the workflow extracts usable text: voice is transcribed, PDFs and spreadsheets are parsed, Word files are converted, and images/photos are analyzed to produce a text description.
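Conceptually, the router is just a chain of presence checks on the incoming Telegram message object. A minimal JavaScript sketch of that branching logic, illustrative rather than the actual Switch node configuration:

```javascript
// Decide which branch a Telegram message should take.
// Field names follow the Telegram Bot API message object.
function routeMessage(message) {
  if (message.text) return 'text';         // plain question, no extraction needed
  if (message.voice) return 'audio';       // voice note -> transcription branch
  if (message.photo) return 'photo';       // photo -> vision analysis branch
  if (message.document) return 'document'; // upload -> validation and parsing branch
  return 'unsupported';                    // anything else gets an error reply
}

console.log(routeMessage({ voice: { file_id: 'abc123' } })); // "audio"
```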
Your knowledge base is searched. The AI agent runs a retrieval step against Supabase vector search, often with a reranker involved, so only the most relevant chunks of documentation are used as context. Conversation memory can also be included so repeat users don’t have to restate everything.
An answer is generated and sent back. OpenAI produces a response grounded in the retrieved doc snippets, and the workflow posts the reply into the same Telegram chat thread.
You can easily modify the agent prompt to match your brand voice, escalation rules, and what sources it’s allowed to cite. See the full implementation guide below for customization options.
Step-by-Step Implementation Guide
Step 1: Configure the Telegram Trigger
This workflow begins when a Telegram message arrives and immediately runs two branches in parallel.
- Add and open Telegram Message Trigger.
- Credential Required: Connect your telegramApi credentials.
- Set Updates to `message`.
- Confirm parallel execution: Telegram Message Trigger outputs to both Route Incoming Message and Send Typing Action in parallel.
Step 2: Connect Telegram File Fetch and Routing
Messages are categorized into text, voice, photo, or document, then routed to the correct fetch nodes.
- Open Route Incoming Message and verify the four outputs: Text, Audio, Photo, and Document, checking `{{$json.message.text}}`, `{{$json.message.voice}}`, `{{$json.message.photo}}`, and `{{$json.message.document}}` respectively.
- Credential Required: Connect your telegramApi credentials to all Telegram file nodes (Fetch Voice File, Fetch Photo File, Fetch Image File, Fetch PDF File, Fetch Word File, Fetch Spreadsheet File, Fetch JSON File, Fetch XML File, and Send Typing Action).
- Set Send Typing Action → Operation to `sendChatAction` and Chat ID to `{{$json.message.chat.id}}`.
- In each fetch node, keep Resource set to `file` and ensure the correct File ID expression is used (for example, `{{$json.message.document.file_id}}` in document nodes).
Step 3: Set Up Document Validation and Type Routing
Document uploads are validated and categorized before file-specific parsing begins.
- Configure Check Document Extensions to keep the supported list in the code: `['.jpg', '.jpeg', '.png', '.webp', '.pdf', '.doc', '.docx', '.xls', '.xlsx', '.json', '.xml']`.
- In Validate File Support, ensure the conditions check that `{{$json.is_supported}}` is true and `{{$json.reason}}` equals `supported_file_type`.
- Confirm the success path routes to Categorize Documents and the failure path routes to Map Error Message.
- Review Categorize Documents to ensure it sets fileTypeCategory to `image`, `pdf`, `word document`, `spreadsheet`, `json`, or `xml file`.
- Confirm Route Document Types sends each category to the correct fetch node (e.g., Image → Fetch Image File).
Step 4: Set Up File Parsing and Normalization
Each file type is normalized and converted into text for the assistant.
- For images, keep Normalize Image Mime and Normalize Photo Mime code unchanged so the mimeType is set correctly before analysis.
- Verify Parse PDF Text uses Operation `pdf`, Parse Spreadsheet Data uses `xlsx`, Parse JSON Content uses `fromJson`, and Parse XML Content uses `xml`.
- Ensure Convert File to Base64 uses Operation `binaryToPropery` prior to Docx to Text API.
- In Docx to Text API, set URL to `https://v2.convertapi.com/convert/docx/to/txt` and keep the JSON body expression intact.
- Replace the Authorization header in Docx to Text API with your ConvertAPI token (currently `[CONFIGURE_YOUR_TOKEN]`).
- Set Retrieve Text Output → URL to `{{$json.Files[0].Url}}` so the converted text is fetched.
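The Normalize Image Mime and Normalize Photo Mime code nodes exist because Telegram file downloads don’t always carry a usable mimeType, and the vision step needs one. A hedged sketch of what that normalization can look like; the template’s actual node code may differ:

```javascript
// Derive a mimeType from the Telegram file_path extension so the
// image-analysis node receives valid metadata.
const MIME_BY_EXTENSION = {
  jpg: 'image/jpeg',
  jpeg: 'image/jpeg',
  png: 'image/png',
  webp: 'image/webp',
};

function normalizeImageMime(filePath) {
  const ext = filePath.split('.').pop().toLowerCase();
  return MIME_BY_EXTENSION[ext] || 'image/jpeg'; // fall back to JPEG
}

console.log(normalizeImageMime('photos/file_12.PNG')); // "image/png"
```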
Step 5: Set Up AI Processing and Knowledge Base
This step wires up the AI analysis, embeddings, vector storage, and the assistant agent.
- Credential Required: Connect your openAiApi credentials to Transcribe Audio, Analyze Photo Content, Analyze Uploaded Image, and OpenAI Chat Model.
- Set Analyze Photo Content text to
Describe the contents of this photo/imageand Analyze Uploaded Image text toDescribe the content of this image.. - For text mapping, keep the following nodes feeding Knowledge Base Assistant: Map Text Field, Map PDF Text, Map Doc Text, Map Spreadsheet Text, Map JSON Text, Map XML Text, and Compose Photo Text.
- In Compose Photo Text, keep the text value as
=Photo content: {{$json.content}} Photo caption: {{$('Telegram Message Trigger').item.json.message.caption}}. - Open Knowledge Base Assistant and set Text to
{{$json.text}}. - Connect AI sub-nodes to Knowledge Base Assistant and ensure credentials are added on the parent nodes: OpenAI Chat Model (language model), Postgres Conversation Memory (memory), and Supabase Vector Search (tool). Do not add credentials to these sub-nodes directly.
- Credential Required: Connect your postgres credentials to Postgres Conversation Memory and keep Session Key as
{{$('Telegram Message Trigger').item.json.message.chat.id}}. - Credential Required: Connect your cohereApi credentials to Cohere Reranker and your supabaseApi credentials to Supabase Vector Search and Insert Into Supabase Vectors.
Step 6: Connect the Knowledge Base Ingestion Path
The workflow includes a manual ingestion pipeline to load knowledge documents into Supabase vectors.
- Open Manual Execution Start to confirm this path runs only when manually executed.
- In Retrieve Drive File, set Operation to `download` and replace the File ID `[YOUR_ID]` with your knowledge base file.
- Credential Required: Connect your googleDriveOAuth2Api credentials to Retrieve Drive File.
- In Default Data Loader, set Data Type to `binary` and Text Splitting Mode to `custom`.
- Confirm Recursive Text Splitter feeds Default Data Loader, and Generate Embeddings connects to Insert Into Supabase Vectors.
Step 7: Configure Output Delivery
The assistant’s final output is sent back to the user in Telegram.
- Open Send Telegram Reply and set Text to `{{$json.output}}`.
- Set Chat ID to `{{$('Telegram Message Trigger').item.json.message.chat.id}}`.
- Credential Required: Connect your telegramApi credentials to Send Telegram Reply.
Step 8: Test and Activate Your Workflow
Validate each route, confirm AI responses, then enable the workflow for production use.
- Click Execute Workflow to test the manual ingestion path, then verify vectors are added through Insert Into Supabase Vectors.
- Send a Telegram message with text, voice, photo, and a supported document to ensure each branch reaches Knowledge Base Assistant.
- Confirm successful execution by receiving a Telegram response from Send Telegram Reply with relevant content.
- When tests pass, toggle the workflow to Active so Telegram Message Trigger runs in production.
Troubleshooting Tips
- Telegram bot tokens get rotated or pasted incorrectly more often than you’d think. If replies stop, check the Telegram credential in n8n first, then confirm the bot can still message your chat.
- If you’re using Wait-like timing (or external conversion services), processing times vary. Bump up the wait duration if downstream nodes fail on empty responses, especially for big PDFs or slow doc-to-text conversions.
- Supabase vector search looks “broken” when the embeddings don’t match the same model or chunking strategy. If results are irrelevant, re-check your embedding settings, chunk sizes, and that your knowledge-base ingestion workflow actually inserted new vectors.
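A quick way to catch the mixed-embeddings problem: pull a sample of stored vectors and check their dimensions against what your current model emits (OpenAI’s text-embedding-3-small produces 1536-dimension vectors by default; adjust if you use another model). A small illustrative helper:

```javascript
// Report stored vectors whose length doesn't match the embedding
// model's output dimension, the classic sign of mixed models.
function findDimensionMismatches(vectors, expectedDim = 1536) {
  return vectors
    .map((vec, index) => ({ index, dim: vec.length }))
    .filter((entry) => entry.dim !== expectedDim);
}

const sample = [new Array(1536).fill(0), new Array(768).fill(0)];
console.log(findDimensionMismatches(sample)); // [ { index: 1, dim: 768 } ]
```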
Quick Answers
How long does this take to set up?
About 1–2 hours if your keys and Supabase project are ready.
Do I need to know how to code?
No. You’ll mostly connect credentials, paste API keys, and edit prompts. There is some light configuration around file handling, but it’s not “build a backend” work.
Is there a free way to run this?
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in OpenAI usage (typically a few cents per conversation) plus any document conversion or reranking calls you enable.
Where should I host n8n?
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
Can I customize the knowledge base and the bot’s behavior?
Yes, and you should. You can swap what gets indexed by changing the Google Drive ingestion path (the manual “Retrieve Drive File” to “Insert Into Supabase Vectors” section), and you can reshape how the assistant behaves by editing the “Knowledge Base Assistant” agent prompt. Common tweaks include adding escalation rules (“hand off to a human”), restricting answers to specific collections, and changing the tone for sales vs. support.
Why isn’t my bot replying?
Usually it’s an invalid or rotated bot token in your Telegram credentials. It can also be chat permissions (the bot isn’t allowed to message that chat), or you’re testing in a different Telegram thread than the trigger is listening to. If the trigger works but replies don’t, verify the “Send Telegram Reply” node is pointing to the correct chat ID from the incoming message.
How many messages can this handle?
If you self-host, there’s no execution cap, so it mostly depends on your server and API limits. On n8n Cloud, your plan sets the monthly execution allowance. Practically, this workflow handles support volume well, but large files and image analysis can slow throughput, so many teams start with “dozens per day” and scale up once prompts and indexing are dialed in.
Should I build this in n8n, Zapier, or Make?
For RAG-style bots with multiple file types, n8n is usually the easier long-term choice. You get more flexible routing (Switch/If logic), better control over how files are parsed, and a clear path to self-host for unlimited runs. Zapier or Make can work for simple “message in, message out” flows, but once you add embeddings, reranking, and document conversion, you’ll feel their limits. The other factor is cost: multi-step AI workflows can get expensive fast on per-task pricing. Talk to an automation expert if you want help choosing based on your volume.
Once this is live, your Telegram bot stops guessing and starts answering from the same docs your team trusts. Set it up, tune the prompt, and enjoy the quiet.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.