January 22, 2026

Telegram + Google Maps, instant taxi fare quotes

Lisa Granqvist, Workflow Automation Expert

Taxi pricing questions look simple. Then you’re stuck in a Telegram back-and-forth, copying addresses, asking “pickup or drop-off?”, and trying not to quote the wrong number.

Dispatch teams feel it first, but ops managers and owners running small fleets feel it too. This taxi fare quotes automation turns messy chat messages into fast, consistent estimates without your staff doing math in their heads.

You’ll see how the workflow reads a Telegram message, checks your service rules in Postgres, calculates distance with Google Maps, and hands off clean estimate data to providers.

How This Automation Works

The full n8n workflow, from trigger to final output:

n8n Workflow Template: Telegram + Google Maps, instant taxi fare quotes

The Problem: Fare Quotes Turn Into Endless Chat

A customer asks, “How much from the airport to downtown?” and you think it’ll take 30 seconds. It doesn’t. Someone has to confirm addresses, check if the service is active for that area, estimate distance, factor in provider rules, then type a reply that won’t trigger another five questions. Do that all day and your “quick answers” become a second job. Worse, quotes get inconsistent across agents, which leads to arguments at pickup and awkward refunds later.

The friction compounds. Here’s where it breaks down in real life.

  • Agents retype the same clarifying questions, so response time drifts from seconds to several minutes.
  • Manual distance checks in Google Maps get skipped when it’s busy, and that’s how bad quotes happen.
  • Service rules live in someone’s head (or a spreadsheet), which means new staff guess.
  • Provider handoffs are messy, so the “quote” never turns into a booked ride.

The Solution: Telegram → Service Rules → Route Distance → Quote Handoff

This n8n workflow acts like a quote coordinator for your taxi chat. A message comes in from Telegram (directly, or from a separate call center workflow), and the automation immediately checks your service configuration in PostgreSQL to confirm it exists and is active. It then resets session data in Redis, clears any old route info, and asks an AI Agent to turn the customer’s message into structured “route data” (pickup, drop-off, language, and any context). Once the route is clear, the workflow calls the Google Maps Routes API to calculate distance. Finally, it formats a customer-friendly reply (including multi-language responses if you want that) and passes the estimate payload to one or more provider sub-workflows for pricing and dispatch.

The workflow starts with a chat trigger and a service lookup. Then it builds route data with the AI Agent, uses Google Maps to compute distance, and hands clean details to your provider workflows. No more improvising while a customer waits.
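To make the handoff concrete, here is a minimal sketch of what the estimate payload passed to a provider sub-workflow could look like. The field names and the linear fare formula are illustrative assumptions, not the template’s exact schema; real pricing lives in your provider sub-workflow.

```javascript
// Illustrative shape of the estimate payload handed to a provider
// sub-workflow. Field names and the fare formula are assumptions --
// match them to your own provider contract and pricing policy.
function buildEstimatePayload(route, service) {
  const km = route.distanceMeters / 1000;
  // Placeholder linear fare model: base fare plus a per-km rate.
  const fare = service.baseFare + service.perKmRate * km;
  return {
    serviceId: service.id,
    pickup: route.pickup,
    dropoff: route.dropoff,
    distanceKm: Math.round(km * 10) / 10,
    estimatedFare: Math.round(fare * 100) / 100,
    language: route.language,
  };
}

const payload = buildEstimatePayload(
  { distanceMeters: 12400, pickup: "Airport", dropoff: "Downtown", language: "en" },
  { id: "svc-1", baseFare: 5, perKmRate: 1.8 }
);
// 12.4 km at a $5 base plus $1.80/km -> estimatedFare: 27.32
```

Keeping the payload flat like this makes it easy to fan out to several provider sub-workflows without re-mapping fields in each one.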

What You Get: Automation vs. Results

Example: What This Looks Like

Say your team handles 30 fare requests a day in Telegram. Manually, a “simple quote” often takes about 6 minutes: 2 minutes clarifying locations, 2 minutes checking Maps, and 2 minutes typing the reply and handing off to a provider. That’s roughly 3 hours daily. With this workflow, an agent lets the chat trigger run, waits for the Google Maps distance lookup and provider workflow to return, then sends the final message, which is closer to 1 minute of attention per request. That gets you roughly 2.5 hours back most days, and replies stay consistent even when it’s hectic.
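The arithmetic behind that example, spelled out:

```javascript
// Time-savings math from the example above.
const requestsPerDay = 30;
const manualMinutes = 6;    // clarify locations + check Maps + type and hand off
const automatedMinutes = 1; // review and send the final message
const savedMinutes = requestsPerDay * (manualMinutes - automatedMinutes);
const savedHours = savedMinutes / 60; // 150 minutes, i.e. 2.5 hours per day
```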

What You’ll Need

  • n8n instance (try n8n Cloud free)
  • Self-hosting option if you prefer (Hostinger works well)
  • Telegram to receive and reply to quote requests
  • Postgres to store services, rules, and memory
  • Redis for fast session and route caching
  • Google Maps API key (get it from Google Cloud Console)
  • OpenAI or xAI model access (get an API key from your model provider)

Skill level: Advanced. You’ll connect several credentials, edit prompts, and configure sub-workflows for your providers.

Don’t want to set this up yourself? Talk to an automation expert (free 15-minute consultation).

How It Works

A Telegram chat message kicks things off. The workflow can start from the built-in chat trigger, or from an “Execute Sub-workflow Trigger” if you already have a separate call center workflow upstream.

Your service rules are loaded and validated. n8n checks Redis first (fast), then PostgreSQL if needed, to find the matching service record. If the service is missing or inactive, it immediately returns a clean error response instead of wasting anyone’s time.
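The Redis-first, Postgres-fallback lookup is a classic cache-aside pattern. A minimal sketch of the idea, assuming a `sys_service` table and a `service:<id>` key format (both illustrative; `redis` and `db` stand in for the template’s Redis and Postgres nodes):

```javascript
// Cache-aside service lookup, roughly as it might appear in an n8n
// Code node. Key names, table name, and client interfaces are
// assumptions for illustration.
async function getService(serviceId, redis, db) {
  const cached = await redis.get(`service:${serviceId}`);
  if (cached) return JSON.parse(cached); // cache hit: skip Postgres entirely

  const rows = await db.query(
    "SELECT * FROM sys_service WHERE id = $1 AND active = true",
    [serviceId]
  );
  if (rows.length === 0) return null; // missing or inactive: take the error path

  // Write back to the cache with a short TTL so rule changes propagate.
  await redis.set(`service:${serviceId}`, JSON.stringify(rows[0]), { EX: 300 });
  return rows[0];
}
```

The short TTL is the important design choice: it keeps lookups fast without letting stale service rules linger after you edit them in Postgres.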

An AI Agent turns chat into route data. The automation clears old route cache, then uses the AI Agent plus chat memory (Postgres conversation memory and optional user memory) to interpret what the customer meant. If the message is incomplete, the agent can guide the conversation until it has pickup and drop-off details.

Google Maps calculates the distance and the workflow hands off pricing. An HTTP Request node calls the Google Maps Routes API, then the workflow formats language-specific outputs (English, Chinese, Japanese) and sends a structured payload to your provider sub-workflows for the final estimate and dispatch logic.
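The distance call itself is a single POST to Google’s `computeRoutes` endpoint. A sketch of building that request, following the public Routes API shape (the helper name and how the template wires these values internally are assumptions):

```javascript
// Build the request the HTTP Request node sends to the Routes API.
const ROUTES_ENDPOINT = "https://routes.googleapis.com/directions/v2:computeRoutes";

function buildRouteRequest(pickup, dropoff, apiKey) {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Goog-Api-Key": apiKey,
      // The Routes API rejects requests that omit a field mask.
      "X-Goog-FieldMask": "routes.distanceMeters,routes.duration",
    },
    body: JSON.stringify({
      origin: { address: pickup },
      destination: { address: dropoff },
      travelMode: "DRIVE",
    }),
  };
}

// Usage: const res = await fetch(ROUTES_ENDPOINT, buildRouteRequest(a, b, key));
// The JSON response carries routes[0].distanceMeters and routes[0].duration.
```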

You can easily modify the service lookup rules to support multiple brands or regions based on your needs. See the full implementation guide below for customization options.

Step-by-Step Implementation Guide

Step 1: Configure the Workflow Start Trigger

This workflow can start from a manual parent workflow call or a chat webhook. Set up both entry points so the automation can be triggered in production and during testing.

  1. Add and configure Workflow Start Trigger to allow execution from other workflows.
  2. Configure Chat Webhook Trigger to receive inbound chat requests for live conversations.
  3. Connect Chat Webhook Trigger to Prepare Test Fields so chat payloads are normalized before processing.
  4. Connect Workflow Start Trigger to Assign Input Fields to normalize inputs from the parent workflow.

Use Prepare Test Fields to simulate live chat payloads when you need to test the flow without calling a webhook.

Step 2: Connect Redis and Postgres Data Sources

This workflow uses Redis for caching and Postgres for service records and memory. Configure credentials for all Redis and Postgres nodes to avoid runtime failures.

  1. In Read Service Cache, Write Service Cache, Read Route Cache, Remove Route Cache, and Reset Session Cache, add Redis connections.
  2. In Fetch Service Records, add a Postgres connection for your service database.
  3. In Retrieve User Memory and Store User Memory, add Postgres connections for memory storage.
  4. Ensure all Redis tool nodes (Refresh Session Cache and Create Route Cache) are configured with Redis access.

Credential Required: Connect your Redis credentials to all Redis and Redis tool nodes.
Credential Required: Connect your Postgres credentials to Fetch Service Records, Retrieve User Memory, Store User Memory, and Postgres Conversation Memory (added via the parent agent).

Step 3: Configure Cache Checks and Service Data Assembly

These nodes determine whether data is fetched from cache or database, and how service data is assembled for downstream processing.

  1. Confirm Assign Input Fields outputs to Read Service Cache for initial cache lookup.
  2. Configure Service Cache Check to branch between Transform Service Payload (cache hit) and Fetch Service Records (cache miss).
  3. Set up Active Status Check to route active services to Write Service Cache and inactive services to Compose Inactive Response.
  4. In Transform Service Payload and Assemble Service Data, normalize and combine service fields for downstream AI and routing logic.
  5. Connect Assemble Service Data to Reset Session Cache and then to Remove Route Cache to clear stale route data.

Because there are 9+ Set nodes and 5+ Redis nodes, keep field mappings consistent across Assign Input Fields, Prepare Test Fields, Assemble Service Data, and the response nodes.

Step 4: Set Up the AI Coordination Agent and Memory

The AI pipeline orchestrates route lookup, memory retrieval, and session cache refresh. All AI tools and memory nodes are attached to the parent agent.

  1. Open AI Coordination Agent and attach xAI Grok Chat Model as the language model.
  2. Attach Postgres Conversation Memory as the memory module for persistent chat context.
  3. Attach tools: Retrieve User Memory, Store User Memory, Create Route Cache, Refresh Session Cache, and Lookup Route Distance to AI Coordination Agent.
  4. Confirm Remove Route Cache feeds into AI Coordination Agent to reset context between service requests.

Credential Required: Connect your xAI Grok credentials to xAI Grok Chat Model.
Credential Required: Connect your Postgres credentials for Postgres Conversation Memory, and your Redis credentials for Create Route Cache and Refresh Session Cache.
For AI tool and memory sub-nodes, add credentials on the parent AI Coordination Agent, not on the sub-node itself.

Step 5: Configure Route Cache Logic and Parallel Branching

The route cache controls whether the system uses cached route data or generates new data, then branches into response localization and a secondary sub-workflow.

  1. Ensure AI Coordination Agent outputs to Read Route Cache, then to Route Cache Check for cache evaluation.
  2. Connect Route Cache Check to Transform Route Payload for cache miss, and to Set Final Output for cache hit.
  3. In Transform Route Payload, build the payload that will be used for response localization and the secondary sub-workflow.
  4. Transform Route Payload outputs to both Run Sub-Workflow (Configure Required) 2 and Route Language Switch in parallel.

⚠️ Common Pitfall: If Route Cache Check does not properly detect cache hits, the workflow may skip Set Final Output and keep regenerating route data.
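One way to avoid that pitfall is to make the hit condition defensive: treat only a non-empty, parseable, non-trivial cache value as a hit. A sketch of such a check, usable as the logic behind the Route Cache Check IF node (the function name is hypothetical):

```javascript
// Defensive cache-hit check: anything empty, unparseable, or trivially
// empty counts as a miss, so the workflow regenerates route data
// instead of acting on a bad cache entry.
function isCacheHit(raw) {
  if (raw === null || raw === undefined || raw === "") return false;
  try {
    const parsed = JSON.parse(raw);
    return Boolean(parsed) && typeof parsed === "object" && Object.keys(parsed).length > 0;
  } catch {
    return false; // corrupt cache entry: fall through to regeneration
  }
}
```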

Step 6: Configure Language Routing and Sub-Workflows

Language-specific responses are generated and routed to a separate sub-workflow for final delivery or formatting.

  1. Configure Route Language Switch to branch into Chinese Response, Japanese Response, and English Response.
  2. Ensure each response node (Chinese Response, Japanese Response, English Response) connects to Run Sub-Workflow (Configure Required).
  3. Connect Compose Error Reply and Compose Inactive Response to Run Sub-Workflow (Configure Required) for unified output handling.
  4. Configure Run Sub-Workflow (Configure Required) and Run Sub-Workflow (Configure Required) 2 to point to the appropriate child workflows.

Use Set Final Output as the single standardized output to your downstream workflow when route cache is valid.
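Expressed in code, the language branching above is just a template lookup keyed by language, with English as the fallback. The reply wording below is illustrative, not the template’s actual copy:

```javascript
// Pick a reply template by detected language, falling back to English,
// mirroring the Route Language Switch -> response-node branching.
const replyTemplates = {
  en: (d) => `Estimated fare for your ${d.km} km trip: ${d.fare}`,
  zh: (d) => `您 ${d.km} 公里行程的预估车费：${d.fare}`, // Chinese reply
  ja: (d) => `${d.km} km のご乗車の推定料金：${d.fare}`, // Japanese reply
};

function localizeReply(language, details) {
  const template = replyTemplates[language] ?? replyTemplates.en;
  return template(details);
}
```

Adding a language then means adding one template entry and one branch on the switch node, with the fallback keeping unknown languages from breaking the flow.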

Step 7: Test and Activate Your Workflow

Validate each path—cache hit, cache miss, inactive status, and multilingual response—before activating the workflow.

  1. Use Chat Webhook Trigger to run a manual test and confirm data flows through Prepare Test Fields and Assign Input Fields.
  2. Verify that Service Cache Check and Route Cache Check route correctly based on cache state.
  3. Confirm that AI Coordination Agent returns outputs and that memory tools are called successfully.
  4. Look for successful delivery through Run Sub-Workflow (Configure Required) and Run Sub-Workflow (Configure Required) 2.
  5. Once tests pass, toggle the workflow to Active for production use.

Common Gotchas

  • Telegram bot permissions can block replies in groups or channels. If messages arrive but you can’t respond, check your bot settings and chat permissions first.
  • If you’re using Wait nodes or external processing (like provider sub-workflows), timing varies. Bump up the wait duration if downstream nodes fail on empty responses.
  • Google Maps API calls fail quietly when billing isn’t enabled or the Routes API isn’t allowed. Check your Google Cloud API restrictions and quotas before you debug n8n.

Frequently Asked Questions

How long does it take to set up this taxi fare quotes automation?

Plan on about 1–2 hours if your database and API keys are ready.

Do I need coding skills to automate taxi fare quotes?

No, but you will need to be comfortable editing prompts and mapping fields between nodes. If you want to change the Code nodes, that part is more technical.

Is n8n free to use for this taxi fare quotes automation workflow?

Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in Google Maps API usage and your AI model costs.

Where can I host n8n to run this taxi fare quotes automation?

Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.

Can I customize this taxi fare quotes automation workflow for multiple cities or service types?

Yes. This template already expects a “service” record in Postgres (the sys_service table by default), so you can add rows per city, vehicle class, or pricing policy. You’ll typically adjust the service lookup logic, update the AI Agent prompt that creates route data, and tweak the provider sub-workflow that computes final pricing. Multi-language is already scaffolded with a language switch and separate response nodes.

Why is my Telegram connection failing in this taxi fare quotes automation?

Usually it’s the bot not being allowed to read messages in that chat, or the wrong webhook/trigger configuration inside Telegram. Double-check the bot token in n8n credentials, then confirm the chat you’re testing in actually allows the bot to receive and send messages. If it works in private chat but not groups, permissions are the culprit most of the time.

How many chat requests can this taxi fare quotes automation handle?

If you self-host in queue mode (which this template is designed for), it can handle a lot as long as your server and APIs keep up. On n8n Cloud, your limit is based on monthly executions, so high-volume dispatch teams usually move to a higher tier. In practice, the Google Maps call is the main bottleneck, so watch quotas and rate limits as volume grows.

Is this taxi fare quotes automation better than using Zapier or Make?

For this use case, yes, because you’re combining chat memory, caching with Redis, branching logic, and sub-workflows for providers in one place. Zapier and Make can do parts of it, but the moment you need “keep asking until route data is complete” plus database-driven rules, it gets awkward and pricey. n8n also gives you the self-hosting option, which matters if you expect lots of messages. That said, if you only want a two-step “message in, message out” bot, simpler tools can be fine. Talk to an automation expert if you want a quick recommendation.

Set this up once, and fare quotes stop being a distraction. Your team replies faster, customers get clearer answers, and providers receive clean handoffs they can actually act on.

Need Help Setting This Up?

Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.

