Google Drive meets Telegram for instant doc answers
Your docs are “somewhere in Drive,” and somehow that still means you answer the same questions in Slack, email, and meetings. People waste time hunting, skim the wrong version, then ask you anyway because it’s faster than searching.
Marketing leads feel it when they’re trying to pull positioning details from old decks. Ops managers hit it when policies live across a dozen folders. And agency owners deal with it when clients want “the latest” on repeat. This Drive-to-Telegram setup turns Google Drive into a chat-based knowledge helper, so you get a clear answer (with citations) instead of another scavenger hunt.
You’ll see how the workflow watches a Drive folder, indexes new files, and replies to Telegram questions using the exact source passages it found.
How This Automation Works
See how this solves the problem:
n8n Workflow Template: Google Drive meets Telegram for instant doc answers
```mermaid
flowchart LR
subgraph sg0["File uploaded Flow"]
direction LR
n0@{ icon: "mdi:robot", form: "rounded", label: "Default Data Loader", pos: "b", h: 48 }
n1@{ icon: "mdi:cog", form: "rounded", label: "Download file", pos: "b", h: 48 }
n2@{ icon: "mdi:vector-polygon", form: "rounded", label: "Embedding model", pos: "b", h: 48 }
n3@{ icon: "mdi:robot", form: "rounded", label: "AI Agent", pos: "b", h: 48 }
n4@{ icon: "mdi:brain", form: "rounded", label: "Model", pos: "b", h: 48 }
n5@{ icon: "mdi:memory", form: "rounded", label: "Simple Memory", pos: "b", h: 48 }
n6@{ icon: "mdi:play-circle", form: "rounded", label: "File uploaded", pos: "b", h: 48 }
n7@{ icon: "mdi:memory", form: "rounded", label: "Insert documents", pos: "b", h: 48 }
n8@{ icon: "mdi:memory", form: "rounded", label: "Retrieve documents", pos: "b", h: 48 }
n9["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/telegram.svg' width='40' height='40' /></div><br/>Listen for incoming events"]
n10["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/telegram.svg' width='40' height='40' /></div><br/>Telegram"]
n4 -.-> n3
n3 --> n10
n1 --> n7
n6 --> n1
n5 -.-> n3
n2 -.-> n7
n2 -.-> n8
n8 -.-> n3
n0 -.-> n7
n9 --> n3
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n6,n9 trigger
class n0,n2,n3,n5,n7,n8 ai
class n4 aiModel
classDef customIcon fill:none,stroke:none
class n9,n10 customIcon
```
The Challenge: Finding answers inside Drive shouldn’t be a job
Google Drive is great at storing documents. It’s not great at answering questions like “What’s our refund policy for annual plans?” or “Which slide has the newest pricing?” So someone opens three folders, searches by keyword, clicks five files, and still misses the paragraph that matters. Then the question comes back to you, even though the answer already exists. The real cost isn’t only the minutes spent searching. It’s the interruptions, the context switching, and the quiet loss of confidence when different people quote different versions.
It adds up fast, especially when the same questions repeat every week.
- People rely on memory and “tribal knowledge,” so answers change depending on who’s online.
- Searching Drive returns files, not decisions, and someone still has to read and interpret.
- Old docs linger in folders, which makes it easy to quote the wrong version in a client call.
- You lose momentum when every “quick question” turns into a 10-minute detour.
The Fix: A Telegram bot that answers from your Drive docs
This workflow turns a specific Google Drive folder into a living, chat-friendly knowledge base. When a new file is uploaded, n8n automatically fetches it, loads its contents, and converts it into “embeddings” (a way to represent meaning so similar ideas match even when wording differs). Those embeddings are stored in an in-memory vector store that the workflow can search later. Then, when someone asks a question in Telegram, the AI agent retrieves the most relevant passages from the stored docs, drafts a clear answer, and sends it back to Telegram with citations so you can see exactly where it came from. You stop being the human search engine, and your team gets faster, more consistent answers.
The workflow starts with a Drive upload trigger and quietly indexes new documents in the background. Questions arrive through Telegram, get routed to the knowledge agent, and return as a clean response that points back to the source text. It feels like chat, but it behaves like a searchable internal wiki.
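The core idea behind the workflow (embed documents, store the vectors, retrieve by similarity) can be sketched in a few lines of Python. This toy version substitutes a bag-of-words vector for the real OpenAI embeddings, purely to illustrate the “insert documents” and “retrieve documents” steps; the passage texts are made up for the example.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy "embedding": a bag-of-words count vector. The real workflow uses
    # OpenAI embeddings, which match on meaning, not just shared words.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# "Insert documents": index each passage alongside its vector.
store = [(p, embed(p)) for p in [
    "Refund policy: annual plans are prorated after 30 days.",
    "Monthly plans can be cancelled at any time.",
    "Enterprise tier pricing starts at $499 per month.",
]]

# "Retrieve documents": rank stored passages against the question
# and hand the best match to the agent as citable context.
q = embed("What is the refund policy for annual plans?")
best_passage, _ = max(store, key=lambda item: cosine(q, item[1]))
print(best_passage)  # -> the refund passage, which the agent would cite
```

With real embeddings, a question like “money back on yearly subscriptions?” would still land on the refund passage even with zero shared words, which is exactly why the workflow embeds rather than keyword-matches.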
What Changes: Before vs. After
| What This Eliminates | Impact You’ll See |
|---|---|
| Manual keyword searches across nested Drive folders | Answers arrive in Telegram in seconds, with citations |
| Repeated “where is that doc?” interruptions | Teammates self-serve instead of pinging you |
| Quoting stale versions in client calls | Replies point back to the exact source passages indexed |
| 10-minute detours for every “quick question” | Roughly 2–3 hours a week back across the team |
Real-World Impact
Say your team asks 15 “where is that” questions a week, and each one takes about 10 minutes of searching, skimming, and follow-ups. That’s roughly 2–3 hours gone, and it’s usually split across your most expensive people. With this workflow, the “manual work” becomes uploading new docs to one Drive folder, then asking in Telegram. The question still takes a minute to type, and the response comes back quickly with citations, so the time sink doesn’t spread across the whole team.
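The back-of-envelope math above is easy to adapt to your own numbers; the figures below are the article’s example values, not measurements.

```python
# Example values from the scenario above -- swap in your own.
questions_per_week = 15      # "where is that" questions per week
minutes_per_question = 10    # searching, skimming, follow-ups

hours_lost = questions_per_week * minutes_per_question / 60
print(f"{hours_lost:.1f} hours/week")  # -> 2.5 hours/week
```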
Requirements
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- Google Drive to store and watch your documents
- Telegram to ask questions and receive replies
- OpenAI API key (get it from your OpenAI dashboard)
Skill level: Intermediate. You’ll connect accounts, choose a Drive folder, and test prompts, but you won’t be writing an app.
Need help implementing this? Talk to an automation expert (free 15-minute consultation).
The Workflow Flow
A file hits your Drive folder. The Google Drive Trigger watches a specific folder, so “adding knowledge” is as simple as uploading the latest PDF, doc, or internal note.
The document gets prepared for search. n8n fetches the file, loads the text (using a document loader), then sends it through OpenAI embeddings so the meaning is searchable, not just the keywords.
The knowledge store is updated. The workflow places those embeddings into an in-memory vector store, which acts like a fast index the AI agent can query when questions come in.
Questions arrive in Telegram and get answered with sources. A Telegram Trigger captures the message, the AI agent retrieves the most relevant passages from the vector store, and a Telegram node sends back an answer that includes citations to the original text.
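Under the hood, that final Telegram step is a single call to the Bot API’s `sendMessage` method. A minimal sketch of the request n8n builds for you, using a hypothetical bot token, chat ID, and source filename (n8n fills these from your credentials and the trigger data):

```python
import json
from urllib import request

def build_send_message(token: str, chat_id: int, text: str) -> request.Request:
    # Telegram Bot API: POST https://api.telegram.org/bot<token>/sendMessage
    url = f"https://api.telegram.org/bot{token}/sendMessage"
    payload = {"chat_id": chat_id, "text": text}
    return request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical values for illustration only.
req = build_send_message(
    "123456:ABC-example-token",
    987654321,
    "Annual-plan refunds are prorated after 30 days (source: pricing-policy.pdf)",
)
print(req.full_url)
```

Sending the request (e.g. `request.urlopen(req)`) is all the Telegram node does once the agent has an answer; the citation is just part of the message text.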
You can easily modify the watched Drive folder to support separate teams (sales, marketing, ops) based on your needs. See the full implementation guide below for customization options.
Step-by-Step Implementation Guide
Step 1: Configure the Drive Upload Trigger
Set up the workflow to react when files are uploaded to Google Drive so new documents can be ingested into the knowledge store.
- Add and open Drive Upload Trigger.
- Credential Required: Connect your Google Drive credentials.
- Choose the target Drive or folder you want to monitor for uploads.
- Confirm Drive Upload Trigger connects to Fetch Drive File to pass uploaded file data.
Step 2: Connect Google Drive and Document Intake
Retrieve the newly uploaded file so it can be converted into documents for embedding.
- Open Fetch Drive File and select the operation that retrieves the file from Google Drive.
- Credential Required: Connect your Google Drive credentials for Fetch Drive File (separate from the trigger if needed).
- Ensure Fetch Drive File outputs to Store Documents as shown in the workflow flow.
Step 3: Set Up the Knowledge Store and Embeddings
Configure document loading and vector storage so uploaded files can be embedded and retrieved for question answering.
- Open Baseline Data Loader and confirm it is connected to Store Documents via the ai_document port.
- Open Embedding Engine and connect it to both Store Documents and Fetch Stored Docs via the ai_embedding ports.
- Credential Required: Connect your OpenAI credentials on Embedding Engine.
- Open Store Documents and ensure it receives from Fetch Drive File for file ingestion.
Step 4: Configure the Knowledge Agent and Telegram Input
Connect Telegram messages to the AI agent and set up the model and memory components that power responses.
- Open Telegram Event Listener and connect it to Knowledge Agent Core.
- Credential Required: Connect your Telegram credentials on Telegram Event Listener.
- Open Chat Model and set it as the language model for Knowledge Agent Core via the ai_languageModel connection.
- Credential Required: Connect your OpenAI credentials on Chat Model.
- Open Memory Buffer and connect it to Knowledge Agent Core through the ai_memory port.
- Open Fetch Stored Docs and confirm it is connected to Knowledge Agent Core via the ai_tool port.
Step 5: Configure the Telegram Response Output
Send the agent’s response back to the user in Telegram.
- Open Send Telegram Reply and confirm it receives data from Knowledge Agent Core.
- Credential Required: Connect your Telegram credentials for Send Telegram Reply.
- Map the outgoing message field to the agent output so responses are delivered to the original chat.
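The agent’s reply usually arrives as a JSON item with a single text field (commonly named `output` for n8n’s AI Agent node, though the exact key can vary by version, so check a sample execution first). A defensive extraction, sketched in Python for illustration:

```python
def extract_reply(item: dict) -> str:
    # Try the usual field names for the agent's answer; which one your
    # n8n version emits is an assumption -- verify against a real run.
    for key in ("output", "text", "response"):
        value = item.get(key)
        if isinstance(value, str) and value.strip():
            return value
    return "Sorry, I couldn't find an answer in the indexed documents."

print(extract_reply({"output": "Annual-plan refunds are prorated."}))
print(extract_reply({}))  # fallback when the agent returned nothing
```

A fallback message like this keeps the bot from sending an empty Telegram reply when retrieval comes back empty.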
Step 6: Test and Activate Your Workflow
Validate the end-to-end flow for both document ingestion and Telegram Q&A before turning it on.
- Click Execute Workflow and upload a test file to the monitored Google Drive folder to verify Drive Upload Trigger → Fetch Drive File → Store Documents.
- Send a test message to your Telegram bot and confirm Telegram Event Listener triggers Knowledge Agent Core and returns a response via Send Telegram Reply.
- Successful execution looks like a stored document in memory and a Telegram reply with an answer referencing your uploaded content.
- Toggle the workflow to Active so it runs continuously in production.
Watch Out For
- Google Drive OAuth credentials can expire or need specific permissions. If things break, check the Google connection inside n8n’s Credentials and confirm the folder is shared with the connected account first.
- If you’re uploading big files or lots of files at once, embeddings and retrieval can take longer. Give the workflow breathing room and watch for timeouts if the next nodes run before the document is fully loaded.
- Default AI prompts are honestly too generic for real teams. Add your terminology and “how we answer” guidelines early, or you will keep editing the bot’s tone and level of detail.
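If large files are a recurring problem, splitting documents into overlapping chunks before embedding keeps each vector focused and retrieval precise (n8n’s document loader exposes similar chunk-size and overlap settings). A minimal chunker sketch:

```python
def chunk_text(text: str, size: int = 500, overlap: int = 50):
    # Overlapping windows: a sentence that straddles a chunk boundary
    # still appears intact in at least one chunk.
    chunks, start = [], 0
    step = size - overlap
    while start < len(text):
        chunks.append(text[start:start + size])
        start += step
    return chunks

doc = "A" * 1200  # stand-in for a long document
pieces = chunk_text(doc, size=500, overlap=50)
print(len(pieces), [len(p) for p in pieces])  # -> 3 [500, 500, 300]
```

Smaller chunks also mean each embedding call finishes faster, which helps avoid the timeout issue described above.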
Common Questions
How long does this take to set up?
About an hour if your Google Drive and Telegram bot are ready.
Can a non-technical person set this up?
Yes, but someone needs to be comfortable connecting OAuth credentials and testing a few sample questions. No coding is required for the default workflow.
Is this free to run?
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in OpenAI API costs for embeddings and the chat model (usually a few dollars a month for light internal use).
Where should I host n8n?
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
How do I customize this for my team?
Start by swapping the watched Google Drive folder in the Drive Upload Trigger so each team indexes its own docs. Then adjust the Knowledge Agent Core instructions to match your rules (for example: “always answer in bullet points,” or “if unsure, ask a follow-up question”). If you want longer context, increase the Memory Buffer window so the agent remembers more of the conversation.
What if the Drive trigger stops picking up files?
Usually it’s expired OAuth access or the connected account doesn’t have permission to the watched folder. Reconnect the Google Drive credential in n8n, then confirm the folder and the uploaded file are accessible to that same Google user. If it fails only on certain uploads, the file type may not be readable by the loader, so try a PDF or plain text export first.
Are there limits on how much I can run this?
If you self-host n8n, there’s no execution limit (it mostly depends on your server and OpenAI rate limits). On n8n Cloud, your monthly executions depend on your plan, which is enough for most small teams asking dozens of questions a day. One practical limit: this workflow uses an in-memory vector store, so extremely large doc libraries may feel heavy over time unless you move to a persistent store like Supabase.
Is n8n better than Zapier or Make for this?
For RAG-style workflows, usually yes. n8n handles branching logic, document loading, memory, and retrieval steps without turning the scenario into a fragile chain of paid tasks. Zapier and Make are fine for simple “send a message when a file is uploaded,” but this workflow needs retrieval, context, and citations, which is where n8n’s flexibility matters. Also, self-hosting is a big deal if you expect volume. If you’re torn, talk to an automation expert and describe your doc volume and where your team prefers to ask questions.
Once this is running, “go find it in Drive” turns into “ask the bot” and move on. The workflow handles the repetitive lookups so your team can stay in flow.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.