OpenAI + Telegram, faster airline support replies
Your Telegram inbox fills up fast. One passenger asks about baggage limits, another wants a refund, someone else is panicking about a visa, and suddenly your “quick replies” turn into copy-paste chaos.
This is where OpenAI Telegram automation pays off. Support leads get consistency back, ops managers stop firefighting the same questions, and small airline teams can finally answer faster without hiring another shift.
This workflow classifies each question, pulls the right policy context, drafts a clear answer, checks satisfaction, escalates unhappy cases, and logs everything for follow-up and reporting.
How This Automation Works
The full n8n workflow, from trigger to final output:
n8n Workflow Template: OpenAI + Telegram, faster airline support replies
flowchart LR
subgraph sg0["Flow 1"]
direction LR
n0["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/webhook.dark.svg' width='40' height='40' /></div><br/>Webhook - User Question"]
n1["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Extract & Clean Question"]
n2["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Classify Question Category"]
n3["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Parse Category Result"]
n4["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Fetch Knowledge Base Context"]
n5["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Generate AI Answer"]
n6["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Format Final Response"]
n7@{ icon: "mdi:swap-horizontal", form: "rounded", label: "Check User Satisfaction", pos: "b", h: 48 }
n8["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Log Satisfied User"]
n9["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Offer Human Support"]
n10["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/merge.svg' width='40' height='40' /></div><br/>Merge Satisfaction Paths"]
n11["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Log Interaction to Database"]
n12["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/webhook.dark.svg' width='40' height='40' /></div><br/>Send Response to User"]
n5 --> n6
n8 --> n10
n9 --> n10
n6 --> n7
n3 --> n4
n7 --> n8
n7 --> n9
n0 --> n1
n1 --> n2
n10 --> n11
n2 --> n3
n11 --> n12
n4 --> n5
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n7 decision
class n0,n2,n5,n11,n12 api
class n1,n3,n4,n6,n8,n9 code
The Problem: Airline support replies are repetitive and high-risk
Airline and travel support has a nasty mix of “same question, different passenger” and “one wrong answer causes a real problem.” Baggage rules change by route and fare, refunds depend on policy language, and visa guidance needs careful wording. When agents answer manually in Telegram, they bounce between docs, old chats, spreadsheets, and tribal knowledge. It’s slow. Worse, replies drift over time, so passengers get different answers depending on who’s on shift. That inconsistency comes back as complaints, chargebacks, and escalations that could have been avoided.
The friction compounds. Here’s where it breaks down most often:
- Agents waste about 10 minutes per message hunting the right policy or template.
- Refund and baggage answers get paraphrased, which means policy details subtly change.
- Unhappy passengers aren’t flagged early, so the same chat spirals for days.
- Without clean logging, you can’t prove what was said or improve your FAQ coverage.
The Solution: Classify, answer, verify tone, then escalate and log
This n8n workflow turns Telegram into a structured support intake and response system. A passenger message comes in through a webhook-connected chat flow, then the text is cleaned up so the system is working with a clear question (not emojis, signatures, and extra noise). Next, the workflow classifies the inquiry into a category such as baggage, refunds, visas, bookings, or general travel info. Based on that category, it retrieves the right verified context (your FAQ, policy snippets, or knowledge base content), and then OpenAI generates a customer-facing reply that stays on-policy. After the answer is delivered, the workflow asks for satisfaction feedback, escalates unhappy cases to a human channel, and finally logs the whole interaction for analytics and auditing.
The workflow starts with a Telegram/web chat question coming into n8n. Then it categorizes the request, pulls the best matching policy context, and drafts a structured reply. Finally, it checks satisfaction, routes unhappy chats to a human, and writes everything to your logging system so nothing disappears.
What You Get: Automation vs. Results
| What This Workflow Automates | Results You’ll Get |
|---|---|
| Classifies each incoming question (baggage, refunds, visas, bookings, general) | Consistent, on-policy answers no matter who’s on shift |
| Retrieves the matching FAQ/policy context and drafts a reply with OpenAI | Replies ready in about a minute instead of ~10 minutes each |
| Checks satisfaction and escalates unhappy passengers to a human | Problems get flagged early instead of spiralling for days |
| Logs every interaction to your database | An audit trail plus data to improve FAQ coverage |
Example: What This Looks Like
Say your team handles about 30 Telegram questions per day. Manually, if each one takes around 10 minutes to classify, look up baggage/refund/visa rules, and write a careful response, that’s roughly 5 hours of agent time daily. With this workflow, the agent’s role becomes “monitor exceptions”: the message comes in instantly, the AI reply is typically ready in about a minute, and only unhappy cases get kicked to a human. You often get back about 3 hours a day, and the replies stop drifting between shifts.
What You’ll Need
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- Telegram to receive and respond to passenger questions
- OpenAI to classify questions and draft replies
- OpenAI API key (get it from the OpenAI API dashboard)
Skill level: Intermediate. You’ll connect credentials, paste API keys, and map a few fields for your knowledge source and logging endpoint.
Don’t want to set this up yourself? Talk to an automation expert (free 15-minute consultation).
How It Works
A passenger message hits your webhook. The workflow receives the question from Telegram (or any chat channel you connect) and captures the raw text and sender metadata.
The question is cleaned up for accuracy. Normalization removes fluff and formats the message so categorization and answering are based on the actual request, not noise.
OpenAI classifies the topic, then context is retrieved. The automation assigns a category like baggage, refunds, visas, bookings, or general info, and then pulls the most relevant policy/FAQ content so the answer stays grounded.
A structured reply is generated and delivered, then satisfaction is handled. The workflow composes the final response payload, sends it back to the chat, asks if the user is happy, and escalates to human support when they aren’t.
You can easily modify the knowledge source to match your airline’s policies and your escalation route (email, Slack, CRM) based on your needs. See the full implementation guide below for customization options.
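Before the node-by-node guide, here is the whole flow condensed into one runnable sketch. None of these function names exist in the template; the OpenAI calls and logging are stubbed out so you can see how the pieces hand off to each other.

```javascript
// End-to-end sketch of the flow with the OpenAI calls and logging stubbed out.
// Function names are illustrative only; they do not appear in the n8n template.
function normalizeQuestion(body) {
  return String(body.question || body.message || body.text || "").trim();
}

async function classifyQuestion(question) {
  // In the workflow this is an OpenAI call; stubbed here with a keyword guess.
  return /bag|luggage/i.test(question) ? "BAGGAGE" : "GENERAL";
}

function fetchKnowledgeContext(category) {
  const knowledgeBase = {
    BAGGAGE: "Checked baggage rules by fare and route...", // placeholder policy text
    GENERAL: "General travel information...",
  };
  return knowledgeBase[category] || knowledgeBase.GENERAL;
}

async function generateAnswer(question, context) {
  // In the workflow this is the second OpenAI call, grounded in the retrieved context.
  return `Based on our policy (${context}) here is what applies to your question: ${question}`;
}

function isSatisfied(text) {
  return /thank|thanks|helpful|great|perfect|excellent|satisfied/i.test(text);
}

async function handleMessage(body) {
  const question = normalizeQuestion(body);              // clean the incoming text
  const category = await classifyQuestion(question);     // classify the topic
  const context = fetchKnowledgeContext(category);       // pull matching policy context
  const answer = await generateAnswer(question, context); // draft the reply
  const reply = { status: "success", answer, category };

  if (!isSatisfied(question)) {
    reply.supportOptions = "We can connect you with a human agent."; // escalation path
  }
  console.log(JSON.stringify(reply, null, 2)); // stands in for logging + the webhook response
  return reply;
}

await handleMessage({ question: "What's the baggage limit on my ticket?" });
```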
Step-by-Step Implementation Guide
Step 1: Configure the Webhook Trigger
Set up the inbound webhook so your app can send travel questions into the workflow.
- Add the Incoming Query Webhook node and set Path to `airlines-faq`.
- Set HTTP Method to `POST`.
- Set Response Mode to `responseNode` so the workflow replies via Return Reply to Client.
- Optionally keep Flowpast Branding as a visual note for documentation.

Note: the workflow reads the passenger’s question and user details from the JSON body payload, so send everything in the request body (an example follows below).
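To make the expected shape concrete, here is a minimal example of a body a client could POST to that path. The field names come from this guide (`question`, `user_id`, `session_id`); the values are made up.

```javascript
// Example JSON body a chat client might POST to the airlines-faq webhook.
// Field names follow this guide (question, user_id, session_id); values are made up.
const exampleRequestBody = {
  question: "What is the checked baggage allowance on my economy fare?",
  user_id: "tg-872245",        // Telegram user id or any stable identifier
  session_id: "chat-20240518", // optional; used for logging and follow-ups
};

console.log(JSON.stringify(exampleRequestBody, null, 2));
```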
Step 2: Connect OpenAI Requests for Classification and Response
These HTTP requests call OpenAI to classify the question and generate the final answer.
- In Categorize Inquiry, set URL to `https://api.openai.com/v1/chat/completions` and keep Send Body and Send Headers enabled.
- Set Body Parameters for Categorize Inquiry:
  - model = `gpt-3.5-turbo`
  - messages = `=[{"role": "system", "content": "You are a travel question classifier. Classify the following travel question into ONE of these categories: DESTINATIONS, PACKAGES, VISA, TRANSPORT, HOTELS, ACTIVITIES, BOOKING, CANCELLATION, BAGGAGE, GENERAL. Respond with only the category name in uppercase."}, {"role": "user", "content": "{{ $json.userQuestion }}"}]`
  - temperature = `0.2`
  - max_tokens = `20`
- In Compose AI Response, set URL to `https://api.openai.com/v1/chat/completions` with Send Body and Send Headers enabled.
- Set Body Parameters for Compose AI Response:
  - model = `gpt-4`
  - messages = `=[{"role": "system", "content": "You are a helpful and friendly airlines travel assistant. Answer travel questions accurately based on the provided context. Be concise but informative. Use a warm, professional tone. If you don't have specific information, provide general guidance and suggest contacting customer support for details. Always prioritize customer satisfaction and safety.\n\nContext Information:\n{{ $json.knowledgeContext }}"}, {"role": "user", "content": "{{ $json.userQuestion }}"}]`
  - temperature = `0.7`
  - max_tokens = `500`
Note: both OpenAI requests need your API key in an Authorization header (`Authorization: Bearer YOUR_KEY`). Add this in Header Parameters or via HTTP Request credentials.
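For reference, here is roughly what the Categorize Inquiry call amounts to outside n8n, written as a plain fetch request. This is a sketch assuming a Node 18+ environment and an `OPENAI_API_KEY` environment variable; the sample question is invented.

```javascript
// Rough fetch equivalent of the Categorize Inquiry HTTP Request node (Node 18+, ES module).
// Assumes OPENAI_API_KEY is set; the sample question is made up.
const userQuestion = "Can I get a refund if I cancel 48 hours before departure?";

const res = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "gpt-3.5-turbo",
    messages: [
      {
        role: "system",
        content:
          "You are a travel question classifier. Classify the following travel question into ONE of these categories: DESTINATIONS, PACKAGES, VISA, TRANSPORT, HOTELS, ACTIVITIES, BOOKING, CANCELLATION, BAGGAGE, GENERAL. Respond with only the category name in uppercase.",
      },
      { role: "user", content: userQuestion },
    ],
    temperature: 0.2,
    max_tokens: 20,
  }),
});

const data = await res.json();
const category = data.choices[0].message.content.trim(); // e.g. "CANCELLATION"
console.log(category);
```

Compose AI Response works the same way with `gpt-4`, a higher temperature, and the knowledge context injected into the system prompt.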
Step 3: Set Up Question Normalization and Category Context
The code nodes clean input, interpret the classifier output, and inject the right FAQ context.
- In Normalize User Question, keep the provided JavaScript to extract `question`, `user_id`, and `session_id` from the webhook payload (a sketch of this kind of logic follows this list).
- Connect Normalize User Question to Categorize Inquiry to pass `userQuestion` and metadata to the classifier.
- In Read Category Output, keep the JavaScript that reads `$input.item.json.choices[0].message.content` and references Normalize User Question with `$('Normalize User Question').item.json`.
- In Retrieve FAQ Context, keep the knowledge base map and ensure it uses `const relevantContext = knowledgeBase[category] || knowledgeBase['GENERAL'];`.
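The template ships with its own scripts for these nodes. The following is only a minimal sketch of what a Normalize User Question code node (set to run once per item) could look like, so you know what to check when adapting field names; the fallback fields and defaults are assumptions, not the template’s exact code.

```javascript
// Sketch of a Normalize User Question code node ("Run Once for Each Item").
// Not the template's original script; field fallbacks and defaults are illustrative.
const body = $input.item.json.body || $input.item.json;

const userQuestion = String(body.question || body.message || body.text || "")
  .trim()
  .replace(/\s+/g, " "); // collapse whitespace so the classifier sees a clean question

return {
  json: {
    userQuestion,
    userId: body.user_id || "anonymous",
    sessionId: body.session_id || `session-${Date.now()}`,
    timestamp: new Date().toISOString(),
  },
};
```

Retrieve FAQ Context follows the same code-node pattern: a plain object keyed by category, with the `knowledgeBase[category] || knowledgeBase['GENERAL']` fallback shown in the bullet above.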
Step 4: Assemble the Reply and Route Satisfaction
This step generates the final payload, checks satisfaction keywords, and branches to the appropriate follow-up logging path.
- In Assemble Reply Payload, keep the JavaScript that builds `answer`, `relatedLinks`, and usage fields based on Retrieve FAQ Context.
- In Assess Satisfaction, set the condition to match the expression `={{ $json.userQuestion.toLowerCase() }}` against the regex `thank|thanks|helpful|great|perfect|excellent|satisfied` (see the sketch after this list).
- Connect the true output of Assess Satisfaction to Record Happy User and the false output to Provide Human Assistance.
- Ensure both Record Happy User and Provide Human Assistance connect into Combine Satisfaction Routes with Mode set to `combine`.
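The satisfaction branch is just a keyword match. Here it is as a standalone sketch; the IF node does this with the expression above, and the example messages are invented.

```javascript
// Standalone version of the keyword check behind Assess Satisfaction.
// The regex mirrors the IF-node condition above; the example messages are made up.
const satisfactionRegex = /thank|thanks|helpful|great|perfect|excellent|satisfied/;

function isSatisfied(userQuestion) {
  return satisfactionRegex.test(userQuestion.toLowerCase());
}

console.log(isSatisfied("Thanks, that was really helpful!"));              // true  -> Record Happy User
console.log(isSatisfied("This still doesn't answer my refund question.")); // false -> Provide Human Assistance
```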
Note: if the incoming request doesn’t include `body.question`, Normalize User Question may return undefined. Ensure your client sends `question`, `message`, or `text`.
Step 5: Configure Logging and Webhook Response
Log each interaction and return the final response to the original webhook caller.
- In Persist Interaction Log, set URL to `https://your-database-api.com/logs/interactions` and keep Send Body and Send Headers enabled.
- Set the Body Parameters in Persist Interaction Log using expressions:
  - userId = `={{ $json.userId }}`
  - sessionId = `={{ $json.sessionId }}`
  - question = `={{ $json.userQuestion }}`
  - category = `={{ $json.category }}`
  - answer = `={{ $json.answer }}`
  - satisfaction = `={{ $json.satisfactionStatus }}`
  - timestamp = `={{ $json.timestamp }}`
  - tokensUsed = `={{ $json.tokensUsed }}`
- In Return Reply to Client, set Respond With to `json` and keep Response Body as `={{ { "status": "success", "answer": $json.answer, "category": $json.category, "relatedLinks": $json.relatedLinks, "followUpMessage": $json.followUpMessage || null, "supportOptions": $json.supportOptions || null, "sessionId": $json.sessionId, "timestamp": $json.timestamp } }}` (an example of the returned payload follows this list).
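For orientation, this is the rough shape of what a client gets back from Return Reply to Client. The field names match the expression above; every value here is an invented example.

```javascript
// Illustrative shape of the payload Return Reply to Client sends back.
// Field names match the Response Body expression above; all values are made up.
const exampleResponse = {
  status: "success",
  answer: "Economy fares on this route include one 23 kg checked bag...",
  category: "BAGGAGE",
  relatedLinks: ["https://example.com/baggage-policy"], // placeholder link
  followUpMessage: null, // set when the chat is routed through Provide Human Assistance
  supportOptions: null,  // likewise, e.g. how to reach a human agent
  sessionId: "chat-20240518",
  timestamp: "2024-05-18T09:42:00.000Z",
};

console.log(JSON.stringify(exampleResponse, null, 2));
```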
Step 6: Test and Activate Your Workflow
Verify the entire path from webhook intake through AI response and logging, then enable the workflow.
- Click Execute Workflow and send a test POST request to the Incoming Query Webhook URL with a JSON body containing `question` and `user_id` (see the test sketch after this list).
- Confirm the execution flow follows Incoming Query Webhook → Normalize User Question → Categorize Inquiry → Read Category Output → Retrieve FAQ Context → Compose AI Response → Assemble Reply Payload → Assess Satisfaction → Combine Satisfaction Routes → Persist Interaction Log → Return Reply to Client.
- Verify the response includes `answer`, `category`, and `relatedLinks`, and optionally `followUpMessage` if routed through Provide Human Assistance.
- Once confirmed, toggle the workflow to Active to handle live questions.
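A quick way to fire that test from a script (Node 18+). The URL below is a placeholder; copy the real test URL from the Incoming Query Webhook node.

```javascript
// Manual test against the webhook (Node 18+, ES module).
// The URL is a placeholder; copy the real test URL from the Incoming Query Webhook node.
const webhookUrl = "https://your-n8n-instance.example.com/webhook-test/airlines-faq";

const res = await fetch(webhookUrl, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    question: "How much does it cost to add a second checked bag?",
    user_id: "tg-test-001",
  }),
});

console.log(res.status, await res.json()); // expect status, answer, category, relatedLinks
```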
Common Gotchas
- Telegram bot credentials can expire or be mis-scoped. If things break, check your BotFather token and the n8n Telegram credentials field first.
- If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
- Default prompts in AI nodes are generic. Add your brand voice early or you’ll be editing outputs forever.
Frequently Asked Questions
How long does setup take?
About an hour if your Telegram bot and OpenAI key are ready.

Do I need to know how to code?
No. You’ll mostly connect accounts and paste in credentials. A tiny bit of comfort mapping fields helps, but you don’t need to write code.

Is it free to run?
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in OpenAI API costs, which are usually a few cents per conversation depending on message length.

Where should I host n8n?
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.

Can I change where unhappy chats get escalated?
Yes, and it’s one of the easiest changes. Keep the same logic around Assess Satisfaction, then swap the current escalation action in Provide Human Assistance for a Slack message node or your helpdesk/CRM action. Common customizations include tagging “urgent” cases (lost baggage, delays), adding language translation before the reply, and pulling context from a different knowledge source than your current FAQ.

What should I check if the Telegram bot stops replying?
Most of the time it’s the bot token or a changed webhook setup. Regenerate or re-check the Telegram bot token, then confirm the message is actually reaching the Incoming Query Webhook node in n8n. If you see webhook hits but no replies, double-check the reply payload formatting in Assemble Reply Payload. Rate limits can show up too if you blast tests repeatedly in a short window.

How many messages per day can this handle?
On n8n Cloud Starter, you’re typically fine for a small support inbox, and you can move up as volume grows. If you self-host, there’s no execution cap from n8n itself; capacity depends on your server and how heavy your knowledge retrieval step is. Practically, most teams start with a few hundred messages a day and scale from there once logging and escalation are stable.

Is n8n better than Zapier or Make for this?
Often, yes. This workflow has branching logic (satisfaction paths, category routing), logging calls, and tighter control over the “retrieve context then answer” pattern, which n8n handles comfortably without turning into a maze of zaps. n8n also gives you a self-host option, which is handy if your chat volume spikes or you need more control over data retention. Zapier or Make can still work if you’re doing something simple like “new message → send canned reply,” but that’s not what airline support needs, frankly. Talk to an automation expert if you want help choosing.
Once this is running, your team stops rewriting the same baggage and refund answers all day. You get cleaner logs, faster escalations, and a support inbox that feels manageable again.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.