Slack + Gmail: route requests to the right owner
Your Slack DMs are full of “quick questions,” Gmail has half-context threads, and the actual work gets stuck because nobody’s sure who owns what. You end up being the human router. It’s exhausting.
This Slack-and-Gmail routing problem hits Ops leads first, but client-facing account managers and founders feel it too. Requests get answered late, or twice, or not at all. And you’re left cleaning up the handoff.
This n8n workflow fixes the “who handles this?” step automatically by reading the request, extracting what matters, and sending it to the right owner (or the right specialist agent). You’ll see how it works, what you need, and where teams usually trip up.
How This Automation Works
See how this solves the problem:
n8n Workflow Template: Slack + Gmail: route requests to the right owner
```mermaid
flowchart LR
subgraph sg0["Auto-fixing Output Parser Flow"]
direction LR
n0@{ icon: "mdi:robot", form: "rounded", label: "Auto-fixing Output Parser", pos: "b", h: 48 }
n1@{ icon: "mdi:robot", form: "rounded", label: "Structured Output Parser", pos: "b", h: 48 }
n2@{ icon: "mdi:cog", form: "rounded", label: "Reminder Agent", pos: "b", h: 48 }
n3@{ icon: "mdi:swap-horizontal", form: "rounded", label: "Agent Route", pos: "b", h: 48 }
n4@{ icon: "mdi:brain", form: "rounded", label: "Output Parser Model", pos: "b", h: 48 }
n5@{ icon: "mdi:brain", form: "rounded", label: "GPT 4o Mini", pos: "b", h: 48 }
n6["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/webhook.dark.svg' width='40' height='40' /></div><br/>Webhook"]
n7["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/webhook.dark.svg' width='40' height='40' /></div><br/>Reminder Agent Response"]
n8["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/webhook.dark.svg' width='40' height='40' /></div><br/>Email Agent Response"]
n9@{ icon: "mdi:cog", form: "rounded", label: "Email Agent", pos: "b", h: 48 }
n10@{ icon: "mdi:memory", form: "rounded", label: "Postgres Chat Memory", pos: "b", h: 48 }
n11@{ icon: "mdi:robot", form: "rounded", label: "AI Agent", pos: "b", h: 48 }
n12@{ icon: "mdi:cog", form: "rounded", label: "Meeting Agent", pos: "b", h: 48 }
n13["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/webhook.dark.svg' width='40' height='40' /></div><br/>Meeting Agent Response"]
n14@{ icon: "mdi:cog", form: "rounded", label: "Document Agent", pos: "b", h: 48 }
n15["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/webhook.dark.svg' width='40' height='40' /></div><br/>Document Agent2"]
n6 --> n11
n11 --> n3
n3 --> n2
n3 --> n9
n3 --> n12
n3 --> n14
n9 --> n8
n5 -.-> n11
n12 --> n13
n14 --> n15
n2 --> n7
n4 -.-> n0
n10 -.-> n11
n1 -.-> n0
n0 -.-> n11
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n0,n1,n10,n11 ai
class n4,n5 aiModel
class n3 decision
class n6,n7,n8,n13,n15 api
classDef customIcon fill:none,stroke:none
class n6,n7,n8,n13,n15 customIcon
```
The Challenge: Requests Don’t Come in “Pre-Sorted”
Most inbound requests don’t arrive neatly labeled as “support,” “billing,” “meeting,” or “document review.” They come as a rushed Slack message with missing details, or a Gmail email thread where the real ask is buried three replies down. So you triage manually. You read, interpret, ask follow-up questions, and decide who should handle it. Then you chase the owner when it goes quiet. Honestly, the time loss isn’t just in the routing. It’s the context switching and the constant “Did anyone pick this up?” mental load.
It adds up fast, especially once your team has more than a few people and requests start arriving all day.
- Requests get bounced between people because the first person who sees it is rarely the right owner.
- Important details are missing, so you end up asking the same clarifying questions in Slack and email.
- Nothing is consistently tracked, which means recurring problems never become a process.
- Response times stretch out because ownership is unclear, not because the task is hard.
The Fix: AI-Based Routing to the Right Sub-Workflow
This workflow acts like a smart intake desk. A request arrives via webhook (which can be connected to Slack, Gmail, a form, or anything that can send an HTTP call). An AI routing assistant reads the raw text and figures out two things: what the user is trying to accomplish, and which specialized “agent” should handle it. Then the workflow forces that AI output into a strict JSON format (so it doesn’t drift into vague text), checks it, and auto-repairs it if needed. Finally, n8n routes the request to the right sub-workflow (Reminder, Email, Meeting, or Document) and returns a clean response.
The flow starts with one intake point and one decision. After that, each request goes down a dedicated path with its own logic, which keeps your automation clean as it grows. Your team sees a clear handoff instead of a “someone should do this” message floating around.
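Under the hood, the router’s only job is to turn free-form text into a small, predictable record. A minimal Python sketch of that contract (the field names `Agent Name` and `user input` match the workflow’s parser schema; the keyword heuristic is just a stand-in for the LLM call):

```python
import json

AGENTS = ["Reminder Agent", "Email Agent", "Meeting Agent", "Document Agent"]

def classify(message: str) -> dict:
    """Stand-in for the AI routing assistant: map raw text to a routed record."""
    text = message.lower()
    if "remind" in text:
        agent = "Reminder Agent"
    elif "email" in text or "reply" in text:
        agent = "Email Agent"
    elif "meeting" in text or "schedule" in text:
        agent = "Meeting Agent"
    else:
        agent = "Document Agent"
    # Strict, predictable shape -- the same idea the structured output parser enforces.
    return {"Agent Name": agent, "user input": message.strip()}

record = classify("Remind me tomorrow at 9 AM to send the contract")
print(json.dumps(record))
```

Everything downstream can rely on those two keys existing, which is what keeps the sub-workflows simple.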
What Changes: Before vs. After
| What This Eliminates | Impact You’ll See |
|---|---|
| Manually reading and triaging every incoming request | Requests are classified and routed the moment they arrive |
| “Who owns this?” follow-ups and bounced handoffs | Each request lands with a clear owner or specialist agent |
| Re-asking the same clarifying questions in Slack and email | The router extracts the cleaned-up ask before handoff |
| Slow responses caused by unclear ownership | Intake and handoff stop stealing your team’s attention |
Real-World Impact
Say your team handles about 30 inbound requests a week across Slack and Gmail. Manually, you might spend 5 minutes reading, clarifying, and deciding ownership for each one, which is roughly 2.5 hours weekly, and that’s before the “who’s on it?” follow-ups. With this workflow, submitting the request takes about 1 minute, then the routing happens automatically while you move on. You still do the actual work, but the intake and handoff stop stealing your attention.
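A quick sanity check on that estimate (the request counts and per-request minutes are the illustrative figures from the paragraph above, not measurements):

```python
requests_per_week = 30
manual_minutes_each = 5      # read, clarify, decide ownership
automated_minutes_each = 1   # just submit the request

manual_hours = requests_per_week * manual_minutes_each / 60
automated_hours = requests_per_week * automated_minutes_each / 60
print(f"manual: {manual_hours:.1f} h/week, automated: {automated_hours:.1f} h/week")
# 30 * 5 = 150 min = 2.5 h manual; 30 * 1 = 30 min = 0.5 h automated
```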
Requirements
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- Slack as a request intake channel (optional).
- Gmail to pull in and classify email requests (optional).
- OpenRouter API key (get it from your OpenRouter dashboard).
Skill level: Intermediate. You will connect credentials and map a few fields, plus point the “Execute Workflow” nodes at your sub-workflows.
Need help implementing this? Talk to an automation expert (free 15-minute consultation).
The Workflow Flow
A request hits your intake webhook. That webhook can sit behind a Slack shortcut, a Gmail forwarder, a simple form, or an internal tool. The workflow receives raw text plus any metadata you pass in (like requester, channel, or email subject).
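Any system that can make an HTTP call can feed this intake. A minimal sketch of the payload shape, using only the Python standard library (the URL is a placeholder for your own n8n webhook, and the metadata keys are examples; this builds the request without sending it):

```python
import json
import urllib.request

# Placeholder -- substitute your n8n instance's webhook URL.
WEBHOOK_URL = "https://your-n8n.example.com/webhook/3576c6b9-11a2-4375-b7cb-f58e36557a7b"

payload = {
    "message": "Can you set up a meeting with the design team on Friday?",
    # Optional metadata the workflow can pass through to sub-workflows.
    "requester": "ava@example.com",
    "channel": "slack",
}
req = urllib.request.Request(
    WEBHOOK_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.get_method(), req.full_url)
# To actually send it: urllib.request.urlopen(req).read()
```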
The AI routing assistant classifies the request. Using an OpenRouter chat model (GPT-4o Mini style), it identifies the best “Agent Name” (Reminder, Email, Meeting, Document) and extracts the cleaned-up user request so downstream steps don’t need to interpret messy language.
Structured parsing keeps outputs reliable. The workflow runs the model output through a structured output parser, and an auto-fixing parser repairs formatting if the model returns something slightly off. This is where a lot of AI automations fail, so it’s a big deal that this is built in.
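The auto-fixing step matters because models often wrap otherwise-valid JSON in markdown fences or drop a required field. A rough Python equivalent of that parse-validate-repair loop (the repair here only strips code fences; n8n’s auto-fixing parser instead re-prompts the model to correct its own output):

```python
import json
import re

REQUIRED_KEYS = {"Agent Name", "user input"}

def parse_with_repair(raw: str) -> dict:
    """Try to parse model output as JSON; strip markdown fences if the first parse fails."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        # Common failure mode: valid JSON wrapped in ```json ... ``` fences.
        cleaned = re.sub(r"^```(?:json)?\s*|\s*```$", "", raw.strip())
        data = json.loads(cleaned)  # still broken -> surface the error
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"model output missing fields: {missing}")
    return data

messy = '```json\n{"Agent Name": "Email Agent", "user input": "Draft a reply to Sam"}\n```'
print(parse_with_repair(messy)["Agent Name"])
```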
The switchboard routes to the right sub-workflow. n8n uses the “Agent Name” field to choose which Execute Workflow node runs. Each agent can do its own thing, like drafting a reply, creating a reminder, or preparing a document request checklist.
A clean response comes back immediately. The workflow responds to the webhook with the agent’s result (separate webhook reply nodes exist for each path), which makes it easy to display the response back in Slack, a web app, or a Gmail-based process.
You can easily modify the agent categories to match your team (for example, add “Billing Agent” or “Support Agent”) based on your needs. See the full implementation guide below for customization options.
Step-by-Step Implementation Guide
Step 1: Configure the Webhook Trigger
This workflow starts when an external system sends a POST request to the webhook endpoint.
- Add and open Incoming Webhook Trigger.
- Set HTTP Method to `POST`.
- Set Path to `3576c6b9-11a2-4375-b7cb-f58e36557a7b`.
- Set Response Mode to `responseNode` so downstream respondToWebhook nodes can reply.
Step 2: Configure AI Routing with Memory and Parsers
Incoming Webhook Trigger feeds into AI Routing Assistant, which uses a language model, memory, and output parsers to decide the route.
- Open AI Routing Assistant and confirm Prompt Type is `define`.
- Set the prompt in Text to the full router instructions provided, preserving the expression `{{ $json.body.message }}` and the guidance for Reminder, Email, Meeting, and Document routes.
- Ensure Has Output Parser is enabled so the agent uses the parser chain.
- Open Postgres Conversation Memory and set Session Key to `{{ $('Incoming Webhook Trigger').item.json.body.message }}` and Session ID Type to `customKey`.
- Credential Required: Connect your Postgres credentials in Postgres Conversation Memory.
AI sub-nodes are linked to AI Routing Assistant via dedicated connections:
- Compact GPT Model is connected as the language model for AI Routing Assistant — Credential Required: Connect your openRouterApi credentials in Compact GPT Model.
- Auto Repair Output Parser and Structured Result Parser are output parser tools — configure any model credentials on their parent nodes (Parser Model LLM and AI Routing Assistant), not on the parsers themselves.
- Parser Model LLM is connected to Auto Repair Output Parser — Credential Required: Connect your openRouterApi credentials in Parser Model LLM.
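Conceptually, the memory node is a key-value store of conversation turns keyed by session. Note that this workflow keys the session on the incoming message text itself; the sketch below keys on a requester ID instead, which is a common alternative when you want per-user history (a stand-in, not the Postgres Chat Memory implementation):

```python
from collections import defaultdict

# Conceptual stand-in for Postgres Chat Memory: turn history keyed by session.
memory: dict[str, list[str]] = defaultdict(list)

def remember(session_key: str, message: str) -> list[str]:
    """Append a turn to the session's history and return the full history."""
    memory[session_key].append(message)
    return memory[session_key]

remember("ava@example.com", "Remind me tomorrow at 9 AM")
remember("ava@example.com", "Actually make it 10 AM")
print(len(memory["ava@example.com"]))  # two turns under one session key
```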
Step 3: Define Routing Rules in the Switchboard
AI Routing Assistant outputs a structured result, which Routing Switchboard uses to branch to the correct sub-workflow.
- Open Routing Switchboard and set conditions to match the output parser field `{{ $json.output["Agent Name"] }}`.
- Add rules for the four agents: `Reminder Agent`, `Email Agent`, `Meeting Agent`, and `Document Agent`.
- Confirm that each rule routes to its corresponding execute node from the switch output.
Execution Flow: Incoming Webhook Trigger → AI Routing Assistant → Routing Switchboard → routed to one of the four sub-workflows (no parallel execution).
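The Switchboard is just a value match on that one field. In Python terms (the sub-workflow functions are illustrative stand-ins for your Execute Workflow targets):

```python
def run_reminder(query: str) -> str: return f"reminder created for: {query}"
def run_email(query: str) -> str: return f"email drafted for: {query}"
def run_meeting(query: str) -> str: return f"meeting prepared for: {query}"
def run_document(query: str) -> str: return f"document checklist for: {query}"

# Mirrors the Routing Switchboard: one rule per "Agent Name" value.
ROUTES = {
    "Reminder Agent": run_reminder,
    "Email Agent": run_email,
    "Meeting Agent": run_meeting,
    "Document Agent": run_document,
}

def route(parsed: dict) -> str:
    agent = parsed["Agent Name"]
    if agent not in ROUTES:
        # Fail loudly instead of silently dropping the request.
        raise ValueError(f"no route for agent: {agent}")
    return ROUTES[agent](parsed["user input"])

print(route({"Agent Name": "Meeting Agent", "user input": "book a demo call"}))
```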
Step 4: Configure Sub-Workflow Execution Inputs
Each route invokes a different sub-workflow and passes the user’s message as input.
- Open Run Sub-Workflow (Configure Required) A and select the target workflow in Workflow ID.
- Map Workflow Inputs so Query is set to `{{ $json.output["user input"] }}`.
- Open Run Sub-Workflow (Configure Required) B and select the Email sub-workflow in Workflow ID.
- Map User Input to `{{ $json.output["user input"] }}`.
- Repeat for Run Sub-Workflow (Configure Required) C and Run Sub-Workflow (Configure Required) D, mapping User Input to `{{ $json.output["user input"] }}`.
⚠️ Common Pitfall: The execute nodes have empty Workflow ID values. You must select the correct sub-workflows for A, B, C, and D or the routing will fail.
Step 5: Configure Webhook Replies for Each Route
Each sub-workflow returns a response that is sent back to the original webhook request.
- Open Reminder Webhook Reply and set Respond With to `text`.
- Set Response Body to `{{ $json.output }}` so the result is returned to the caller.
- Repeat the same settings for Email Webhook Reply, Meeting Webhook Reply, and Document Webhook Reply.
Step 6: Test and Activate Your Workflow
Validate that routing and responses work end-to-end before turning it on in production.
- Click Execute Workflow and send a POST request to the webhook URL with a JSON body containing `{"message":"Remind me tomorrow at 9 AM"}`.
- Verify that AI Routing Assistant outputs a structured result containing `Agent Name`, `sessionID`, and `user input`.
- Confirm the correct branch triggers and the corresponding webhook reply node returns a `200` response with the sub-workflow output.
- Once verified, toggle the workflow to Active for production use.
Watch Out For
- OpenRouter credentials can expire or need specific permissions. If things break, check your OpenRouter dashboard and the n8n credential test first.
- If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
- Default prompts in AI nodes are generic. Add your brand voice early or you’ll be editing outputs forever.
Common Questions
How long does setup take?
About an hour if your sub-workflows already exist.
Can my team set this up without a developer?
Yes, but you’ll want one person who’s comfortable testing webhooks and credentials. No coding, just careful setup and a bit of patience during the first run.
Is there a free way to run this?
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in OpenRouter LLM usage costs, which are usually small per request but add up with high volume.
Should I use n8n Cloud or self-host?
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
How do I customize or add agent categories?
You can add or change agent types by editing the routing prompt in the AI Routing Assistant and matching those names in the Routing Switchboard. If you want “Support Agent” and “Billing Agent,” create two new sub-workflows, point new Execute Workflow nodes at them, then update the structured output schema so the parser enforces the new options. Common customizations include extracting priority, assigning an owner based on keywords or domain, and logging every request to Google Sheets for reporting.
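When extending the categories, the main failure mode is letting the prompt, the parser schema, and the switch rules drift apart. A quick validation sketch that catches that drift (the extra agent names are examples, not part of the original workflow):

```python
# Keep this set in sync with the routing prompt AND the Switchboard rules.
ALLOWED_AGENTS = {
    "Reminder Agent", "Email Agent", "Meeting Agent", "Document Agent",
    "Billing Agent", "Support Agent",  # newly added categories
}

def validate_route(parsed: dict) -> dict:
    """Reject parser output naming an agent the Switchboard has no rule for."""
    agent = parsed.get("Agent Name")
    if agent not in ALLOWED_AGENTS:
        raise ValueError(
            f"unknown agent {agent!r}; update prompt, schema, and switch together"
        )
    return parsed

print(validate_route({"Agent Name": "Billing Agent", "user input": "refund request"}))
```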
Why is my AI model node failing?
Usually it’s an invalid or expired API key in your n8n credentials. It can also be a model name mismatch (for example, the workflow expects a specific GPT-4o Mini option) or rate limiting if you’re routing lots of requests at once. Check the execution logs on the model nodes first, because they’ll show the actual error returned by the API.
How many requests can this handle?
On n8n Cloud, capacity depends on your plan’s monthly executions, while self-hosting mainly depends on your server. In practice, most teams run hundreds of routed requests a day without issues as long as the LLM provider isn’t throttling you.
Is n8n better than Zapier or Make for this?
Often, yes. The “hard part” here is reliable AI routing with structured output, plus branching into multiple specialized paths, and n8n handles that kind of logic cleanly without turning into a pricing puzzle. Zapier and Make can do it, but you’ll usually feel boxed in once you add more agents, want retries, or need memory for better classification. n8n also gives you the self-host option, which is a big deal when volume grows. If you’re only routing two simple categories, a lighter tool may be fine. If you’re unsure, talk to an automation expert and map it to your actual volume.
Once routing is automatic, requests stop feeling like interruptions and start behaving like a system. Set it up, tune the agent categories, and let the workflow do the sorting while you do work that actually matters.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.