Telegram + Google Maps, instant taxi fare quotes
Taxi pricing questions look simple. Then you’re stuck in a Telegram back-and-forth, copying addresses, asking “pickup or drop-off?”, and trying not to quote the wrong number.
Dispatch teams feel it first. But ops managers and owners running a small fleet feel it too. This taxi fare quote automation turns messy chat messages into consistent estimates, fast, without your staff doing math in their heads.
You’ll see how the workflow reads a Telegram message, checks your service rules in Postgres, calculates distance with Google Maps, and hands off clean estimate data to providers.
How This Automation Works
The full n8n workflow, from trigger to final output:
n8n Workflow Template: Telegram + Google Maps, instant taxi fare quotes
flowchart LR
subgraph sg0["Taxi Fare Quote Flow"]
direction LR
n0@{ icon: "mdi:play-circle", form: "rounded", label: "Flow Trigger", pos: "b", h: 48 }
n1@{ icon: "mdi:swap-vertical", form: "rounded", label: "Input", pos: "b", h: 48 }
n2@{ icon: "mdi:play-circle", form: "rounded", label: "Test Trigger", pos: "b", h: 48 }
n3@{ icon: "mdi:swap-vertical", form: "rounded", label: "Test Fields", pos: "b", h: 48 }
n4@{ icon: "mdi:robot", form: "rounded", label: "AI Agent", pos: "b", h: 48 }
n5@{ icon: "mdi:memory", form: "rounded", label: "Postgres Chat Memory", pos: "b", h: 48 }
n6@{ icon: "mdi:swap-vertical", form: "rounded", label: "Output", pos: "b", h: 48 }
n7@{ icon: "mdi:database", form: "rounded", label: "Update User Session", pos: "b", h: 48 }
n8@{ icon: "mdi:brain", form: "rounded", label: "xAI @grok-2-1212", pos: "b", h: 48 }
n9@{ icon: "mdi:database", form: "rounded", label: "Load User Memory", pos: "b", h: 48 }
n10@{ icon: "mdi:swap-horizontal", form: "rounded", label: "If Service Cache", pos: "b", h: 48 }
n11["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/redis.svg' width='40' height='40' /></div><br/>Service Cache"]
n12["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/postgres.svg' width='40' height='40' /></div><br/>Load Service Data"]
n13["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/redis.svg' width='40' height='40' /></div><br/>Save Service Cache"]
n14@{ icon: "mdi:swap-horizontal", form: "rounded", label: "If Active", pos: "b", h: 48 }
n15@{ icon: "mdi:swap-vertical", form: "rounded", label: "Inactive Output", pos: "b", h: 48 }
n16@{ icon: "mdi:swap-vertical", form: "rounded", label: "Service", pos: "b", h: 48 }
n17@{ icon: "mdi:swap-vertical", form: "rounded", label: "Error Output", pos: "b", h: 48 }
n18@{ icon: "mdi:database", form: "rounded", label: "Save User Memory", pos: "b", h: 48 }
n19@{ icon: "mdi:database", form: "rounded", label: "Create Route Data", pos: "b", h: 48 }
n20["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/redis.svg' width='40' height='40' /></div><br/>Delete Route Data"]
n21["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/redis.svg' width='40' height='40' /></div><br/>Route Data"]
n22@{ icon: "mdi:swap-horizontal", form: "rounded", label: "If Route Data", pos: "b", h: 48 }
n23["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Parse Route"]
n24["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Parse Service"]
n25@{ icon: "mdi:web", form: "rounded", label: "Find Route Distance", pos: "b", h: 48 }
n26["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/redis.svg' width='40' height='40' /></div><br/>Reset Session"]
n27@{ icon: "mdi:swap-horizontal", form: "rounded", label: "Switch", pos: "b", h: 48 }
n28@{ icon: "mdi:swap-vertical", form: "rounded", label: "English", pos: "b", h: 48 }
n29@{ icon: "mdi:swap-vertical", form: "rounded", label: "Chinese", pos: "b", h: 48 }
n30@{ icon: "mdi:swap-vertical", form: "rounded", label: "Japanese", pos: "b", h: 48 }
n31@{ icon: "mdi:cog", form: "rounded", label: "Call Back", pos: "b", h: 48 }
n32@{ icon: "mdi:cog", form: "rounded", label: "Taxi Service Provider", pos: "b", h: 48 }
n1 --> n11
n6 --> n31
n27 --> n29
n27 --> n30
n27 --> n28
n29 --> n31
n28 --> n31
n16 --> n26
n4 --> n21
n4 --> n17
n30 --> n31
n14 --> n13
n14 --> n15
n21 --> n22
n23 --> n32
n23 --> n27
n3 --> n1
n17 --> n31
n0 --> n1
n2 --> n3
n22 --> n23
n22 --> n6
n24 --> n16
n26 --> n20
n11 --> n10
n15 --> n31
n10 --> n24
n10 --> n12
n9 -.-> n4
n18 -.-> n4
n8 -.-> n4
n19 -.-> n4
n20 --> n4
n12 --> n14
n13 --> n16
n25 -.-> n4
n7 -.-> n4
n5 -.-> n4
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n0,n2 trigger
class n4 ai
class n8 aiModel
class n5 ai
class n10,n14,n22,n27 decision
class n7,n9,n11,n12,n13,n18,n19,n20,n21,n26 database
class n25 api
class n23,n24 code
classDef customIcon fill:none,stroke:none
class n11,n12,n13,n20,n21,n23,n24,n26 customIcon
The Problem: Fare Quotes Turn Into Endless Chat
A customer asks, “How much from the airport to downtown?” and you think it’ll take 30 seconds. It doesn’t. Someone has to confirm addresses, check if the service is active for that area, estimate distance, factor in provider rules, then type a reply that won’t trigger another five questions. Do that all day and your “quick answers” become a second job. Worse, quotes get inconsistent across agents, which leads to arguments at pickup and awkward refunds later.
The friction compounds. Here’s where it breaks down in real life.
- Agents retype the same clarifying questions, so response time drifts from seconds to several minutes.
- Manual distance checks in Google Maps get skipped when it’s busy, and that’s how bad quotes happen.
- Service rules live in someone’s head (or a spreadsheet), which means new staff guess.
- Provider handoffs are messy, so the “quote” never turns into a booked ride.
The Solution: Telegram → Service Rules → Route Distance → Quote Handoff
This n8n workflow acts like a quote coordinator for your taxi chat. A message comes in from Telegram (directly, or from a separate call center workflow), and the automation immediately checks your service configuration in PostgreSQL to confirm it exists and is active. It then resets session data in Redis, clears any old route info, and asks an AI Agent to turn the customer’s message into structured “route data” (pickup, drop-off, language, and any context). Once the route is clear, the workflow calls the Google Maps Routes API to calculate distance. Finally, it formats a customer-friendly reply (including multi-language responses if you want that) and passes the estimate payload to one or more provider sub-workflows for pricing and dispatch.
The workflow starts with a chat trigger and a service lookup. Then it builds route data with the AI Agent, uses Google Maps to compute distance, and hands clean details to your provider workflows. No more improvising while a customer waits.
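To make the "route data" idea concrete, here's a minimal sketch of the kind of structured object the AI Agent hands downstream. The field names are illustrative assumptions, not the template's exact schema:

```javascript
// Sketch of the structured "route data" the AI Agent produces from a chat
// message. Field names here are assumptions; adapt them to your own schema.
function buildRouteData(pickup, dropoff, language) {
  return {
    pickup,                                 // customer's pickup address (free text)
    dropoff,                                // customer's drop-off address
    language,                               // "en", "zh", or "ja" for the response switch
    createdAt: new Date().toISOString(),    // when the route was captured
  };
}

const route = buildRouteData("City Airport", "Downtown Plaza", "en");
console.log(JSON.stringify(route, null, 2));
```

Everything after this point in the workflow (distance lookup, pricing, reply formatting) just consumes an object like this, which is why keeping it small and consistent matters.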
What You Get: Automation vs. Results
| What This Workflow Automates | Results You’ll Get |
|---|---|
| Reads Telegram messages and extracts pickup and drop-off details | Replies in seconds instead of minutes |
| Validates service rules against Postgres and Redis | Consistent quotes across every agent |
| Calculates route distance via Google Maps | No skipped distance checks when it’s busy |
| Hands structured estimates to provider sub-workflows | Clean handoffs that actually turn into booked rides |
Example: What This Looks Like
Say your team handles 30 fare requests a day in Telegram. Manually, a “simple quote” often takes about 6 minutes: 2 minutes clarifying locations, 2 minutes checking Maps, 2 minutes typing and handing off to a provider. That’s roughly 3 hours daily. With this workflow, an agent can just let the chat trigger run, wait for the Google Maps distance lookup and provider workflow to return, then send the final message, which is more like 1 minute of attention per request. You get about 2 hours back most days, and replies stay consistent even when it’s hectic.
What You’ll Need
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- Telegram to receive and reply to quote requests
- Postgres to store services, rules, and memory
- Redis for fast session and route caching
- Google Maps API key (get it from Google Cloud Console)
- OpenAI or xAI model access (get an API key from your model provider)
Skill level: Advanced. You’ll connect several credentials, edit prompts, and configure sub-workflows for your providers.
Don’t want to set this up yourself? Talk to an automation expert (free 15-minute consultation).
How It Works
A Telegram chat message kicks things off. The workflow can start from the built-in chat trigger, or from an “Execute Sub-workflow Trigger” if you already have a separate call center workflow upstream.
Your service rules are loaded and validated. n8n checks Redis first (fast), then PostgreSQL if needed, to find the matching service record. If the service is missing or inactive, it immediately returns a clean error response instead of wasting anyone’s time.
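The Redis-first, Postgres-fallback pattern above is classic cache-aside. Here's a compact sketch of the logic the "Service Cache", "Load Service Data", and "Save Service Cache" nodes implement together; the Redis and database clients are stubbed, and the key format is an assumption:

```javascript
// Cache-aside service lookup: try Redis first, fall back to Postgres on a
// miss, then warm the cache. Client objects are stubs; in n8n this logic is
// spread across separate nodes rather than one function.
async function loadService(serviceId, redis, db) {
  const cached = await redis.get(`service:${serviceId}`);
  if (cached) return JSON.parse(cached);            // cache hit: skip the DB

  const service = await db.findService(serviceId);  // cache miss: query Postgres
  if (!service || !service.active) return null;     // missing or inactive: bail early

  await redis.set(`service:${serviceId}`, JSON.stringify(service));
  return service;
}
```

The early `null` return on a missing or inactive service is what lets the workflow send its "inactive" reply without ever touching Google Maps or the AI Agent.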
An AI Agent turns chat into route data. The automation clears old route cache, then uses the AI Agent plus chat memory (Postgres conversation memory and optional user memory) to interpret what the customer meant. If the message is incomplete, the agent can guide the conversation until it has pickup and drop-off details.
Google Maps calculates the distance and the workflow hands off pricing. An HTTP request calls the Google Maps Routes API, then the workflow formats language-specific outputs (English, Chinese, Japanese) and sends a structured payload to your provider sub-workflows for the final estimate and dispatch logic.
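For reference, here's roughly what the "Find Route Distance" HTTP request looks like against the Routes API's `computeRoutes` endpoint. The endpoint, headers, and body shape follow Google's public documentation, but verify them against your enabled API version; the environment variable name is an assumption:

```javascript
// Hedged sketch of the Google Maps Routes API (computeRoutes) request the
// HTTP node sends. Check the shape against Google's current docs.
function buildRouteRequest(pickup, dropoff) {
  return {
    url: "https://routes.googleapis.com/directions/v2:computeRoutes",
    headers: {
      "Content-Type": "application/json",
      "X-Goog-Api-Key": process.env.GOOGLE_MAPS_API_KEY, // assumed env var name
      // Field mask keeps the response small: distance and duration only.
      "X-Goog-FieldMask": "routes.distanceMeters,routes.duration",
    },
    body: {
      origin: { address: pickup },
      destination: { address: dropoff },
      travelMode: "DRIVE",
    },
  };
}
```

Note the `X-Goog-FieldMask` header: the Routes API requires it, and omitting it is a common reason the call fails even with valid billing.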
You can easily modify the service lookup rules to support multiple brands or regions based on your needs. See the full implementation guide below for customization options.
Step-by-Step Implementation Guide
Step 1: Configure the Workflow Start Trigger
This workflow can start from a manual parent workflow call or a chat webhook. Set up both entry points so the automation can be triggered in production and during testing.
- Add and configure Workflow Start Trigger to allow execution from other workflows.
- Configure Chat Webhook Trigger to receive inbound chat requests for live conversations.
- Connect Chat Webhook Trigger to Prepare Test Fields so chat payloads are normalized before processing.
- Connect Workflow Start Trigger to Assign Input Fields to normalize inputs from the parent workflow.
Step 2: Connect Redis and Postgres Data Sources
This workflow uses Redis for caching and Postgres for service records and memory. Configure credentials for all Redis and Postgres nodes to avoid runtime failures.
- In Read Service Cache, Write Service Cache, Read Route Cache, Remove Route Cache, and Reset Session Cache, add Redis connections.
- In Fetch Service Records, add a Postgres connection for your service database.
- In Retrieve User Memory and Store User Memory, add Postgres connections for memory storage.
- Ensure all Redis tool nodes (Refresh Session Cache and Create Route Cache) are configured with Redis access.
Step 3: Configure Cache Checks and Service Data Assembly
These nodes determine whether data is fetched from cache or database, and how service data is assembled for downstream processing.
- Confirm Assign Input Fields outputs to Read Service Cache for initial cache lookup.
- Configure Service Cache Check to branch between Transform Service Payload (cache hit) and Fetch Service Records (cache miss).
- Set up Active Status Check to route active services to Write Service Cache and inactive services to Compose Inactive Response.
- In Transform Service Payload and Assemble Service Data, normalize and combine service fields for downstream AI and routing logic.
- Connect Assemble Service Data to Reset Session Cache and then to Remove Route Cache to clear stale route data.
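The normalization in Transform Service Payload typically looks like a small Code-node function that maps raw Postgres rows into a consistent shape. This sketch assumes snake_case columns and a `status` flag; your `sys_service` columns will differ, so treat the field names as placeholders:

```javascript
// Sketch of the service-payload normalization step, n8n Code-node style.
// Input column names are assumptions; map them to your sys_service schema.
function normalizeService(raw) {
  return {
    serviceId: raw.id,
    name: (raw.name || "").trim(),
    active: raw.status === "active",       // Postgres status flag -> boolean
    baseFare: Number(raw.base_fare) || 0,  // string/numeric columns -> numbers
    perKm: Number(raw.per_km_rate) || 0,
  };
}
```

Normalizing here, before the AI Agent and routing logic run, means every downstream node can rely on the same field names whether the record came from Redis or Postgres.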
Step 4: Set Up the AI Coordination Agent and Memory
The AI pipeline orchestrates route lookup, memory retrieval, and session cache refresh. All AI tools and memory nodes are attached to the parent agent.
- Open AI Coordination Agent and attach xAI Grok Chat Model as the language model.
- Attach Postgres Conversation Memory as the memory module for persistent chat context.
- Attach tools: Retrieve User Memory, Store User Memory, Create Route Cache, Refresh Session Cache, and Lookup Route Distance to AI Coordination Agent.
- Confirm Remove Route Cache feeds into AI Coordination Agent to reset context between service requests.
Step 5: Configure Route Cache Logic and Parallel Branching
The route cache controls whether the system uses cached route data or generates new data, then branches into response localization and a secondary sub-workflow.
- Ensure AI Coordination Agent outputs to Read Route Cache, then to Route Cache Check for cache evaluation.
- Connect Route Cache Check to Transform Route Payload for cache miss, and to Set Final Output for cache hit.
- In Transform Route Payload, build the payload that will be used for response localization and the secondary sub-workflow.
- Transform Route Payload outputs to both Run Sub-Workflow (Configure Required) 2 and Route Language Switch in parallel.
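The payload built here carries the Google Maps distance into pricing. As an illustration of what a provider sub-workflow might do with it, here's a simple base-fare-plus-per-km estimate; the formula is an assumption for demonstration, since real pricing lives in your provider sub-workflow:

```javascript
// Illustrative fare estimate from the distanceMeters value returned by the
// Routes API. The base + per-km formula is a placeholder, not the template's
// actual pricing logic.
function estimateFare(distanceMeters, baseFare, perKm) {
  const km = distanceMeters / 1000;
  return Math.round((baseFare + km * perKm) * 100) / 100; // round to cents
}

// e.g. 12.4 km at a $3.00 base fare and $1.50/km
const quote = estimateFare(12400, 3.0, 1.5);
```

Keeping the formula in the sub-workflow (rather than the main flow) is what lets you swap pricing per provider without touching the quote pipeline.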
Step 6: Configure Language Routing and Sub-Workflows
Language-specific responses are generated and routed to a separate sub-workflow for final delivery or formatting.
- Configure Route Language Switch to branch into Chinese Response, Japanese Response, and English Response.
- Ensure each response node (Chinese Response, Japanese Response, English Response) connects to Run Sub-Workflow (Configure Required).
- Connect Compose Error Reply and Compose Inactive Response to Run Sub-Workflow (Configure Required) for unified output handling.
- Configure Run Sub-Workflow (Configure Required) and Run Sub-Workflow (Configure Required) 2 to point to the appropriate child workflows.
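Conceptually, the Route Language Switch plus the three response nodes amount to picking a reply template by language code with an English fallback. The templates below are placeholders, not the workflow's actual copy:

```javascript
// Sketch of the language-routing step: choose a reply template by language
// code, falling back to English for anything unrecognized.
const templates = {
  en: (fare, km) => `Estimated fare: $${fare} for ${km} km.`,
  zh: (fare, km) => `预计车费：$${fare}，全程 ${km} 公里。`,
  ja: (fare, km) => `推定料金：$${fare}（${km} km）。`,
};

function renderReply(language, fare, km) {
  const template = templates[language] || templates.en; // unknown -> English
  return template(fare, km);
}
```

Adding a fourth language in the real workflow means one more Switch branch and one more response node, mirroring one more entry in a map like this.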
Step 7: Test and Activate Your Workflow
Validate each path—cache hit, cache miss, inactive status, and multilingual response—before activating the workflow.
- Use Chat Webhook Trigger to run a manual test and confirm data flows through Prepare Test Fields and Assign Input Fields.
- Verify that Service Cache Check and Route Cache Check route correctly based on cache state.
- Confirm that AI Coordination Agent returns outputs and that memory tools are called successfully.
- Look for successful delivery through Run Sub-Workflow (Configure Required) and Run Sub-Workflow (Configure Required) 2.
- Once tests pass, toggle the workflow to Active for production use.
Common Gotchas
- Telegram bot permissions can block replies in groups or channels. If messages arrive but you can’t respond, check your bot settings and chat permissions first.
- If you’re using Wait nodes or external processing (like provider sub-workflows), timing varies. Bump up the wait duration if downstream nodes fail on empty responses.
- Google Maps API calls fail quietly when billing isn’t enabled or the Routes API isn’t allowed. Check your Google Cloud API restrictions and quotas before you debug n8n.
Frequently Asked Questions
How long does setup take?
Plan on about 1–2 hours if your database and API keys are ready.
Do I need to know how to code?
No, but you will need to be comfortable editing prompts and mapping fields between nodes. If you want to change the Code nodes, that part is more technical.
Is n8n free to use?
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in Google Maps API usage and your AI model costs.
Should I use n8n Cloud or self-host?
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
Can I adapt this for multiple cities, vehicle types, or pricing policies?
Yes. This template already expects a “service” record in Postgres (the sys_service table by default), so you can add rows per city, vehicle class, or pricing policy. You’ll typically adjust the service lookup logic, update the AI Agent prompt that creates route data, and tweak the provider sub-workflow that computes final pricing. Multi-language is already scaffolded with a language switch and separate response nodes.
Why isn’t my Telegram bot responding?
Usually it’s the bot not being allowed to read messages in that chat, or the wrong webhook/trigger configuration inside Telegram. Double-check the bot token in n8n credentials, then confirm the chat you’re testing in actually allows the bot to receive and send messages. If it works in private chat but not groups, permissions are the culprit most of the time.
How much volume can this handle?
If you self-host in queue mode (which this template is designed for), it can handle a lot as long as your server and APIs keep up. On n8n Cloud, your limit is based on monthly executions, so high-volume dispatch teams usually move to a higher tier. In practice, the Google Maps call is the main bottleneck, so watch quotas and rate limits as volume grows.
Is n8n better than Zapier or Make for this?
For this use case, yes, because you’re combining chat memory, caching with Redis, branching logic, and sub-workflows for providers in one place. Zapier and Make can do parts of it, but the moment you need “keep asking until route data is complete” plus database-driven rules, it gets awkward and pricey. n8n also gives you the self-hosting option, which matters if you expect lots of messages. That said, if you only want a two-step “message in, message out” bot, simpler tools can be fine. Talk to an automation expert if you want a quick recommendation.
Set this up once, and fare quotes stop being a distraction. Your team replies faster, customers get clearer answers, and providers receive clean handoffs they can actually act on.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.