Onfleet + Slack: dispatch changes handled for you
Dispatch changes usually start as a quick Slack message. Then each one turns into a little scavenger hunt through Onfleet: copied IDs, double-checking which task you meant, and a “wait, did anyone actually update it?” moment. That’s where mistakes creep in.
If you’re a dispatcher, you feel this daily. An ops manager sees the knock-on effects when routes drift, and a customer support lead gets stuck translating “can you move this stop?” into the right Onfleet change. This Onfleet Slack automation turns those Slack requests into real Onfleet actions, consistently.
You’ll see what the workflow does, what you need to run it, and how it reduces the back-and-forth that slows down dispatch when things get busy.
How This Automation Works
See how this solves the problem:
n8n Workflow Template: Onfleet + Slack: dispatch changes handled for you
flowchart LR
subgraph sg0["Onfleet MCP Gateway Flow"]
direction LR
n0@{ icon: "mdi:play-circle", form: "rounded", label: "Onfleet MCP Gateway", pos: "b", h: 48 }
n1@{ icon: "mdi:cog", form: "rounded", label: "Generate Admin User", pos: "b", h: 48 }
n2@{ icon: "mdi:cog", form: "rounded", label: "Remove Admin User", pos: "b", h: 48 }
n3@{ icon: "mdi:cog", form: "rounded", label: "Retrieve Admin List", pos: "b", h: 48 }
n4@{ icon: "mdi:cog", form: "rounded", label: "Modify Admin User", pos: "b", h: 48 }
n5@{ icon: "mdi:cog", form: "rounded", label: "Append Tasks", pos: "b", h: 48 }
n6@{ icon: "mdi:cog", form: "rounded", label: "Fetch Container Info", pos: "b", h: 48 }
n7@{ icon: "mdi:cog", form: "rounded", label: "Revise Tasks", pos: "b", h: 48 }
n8@{ icon: "mdi:cog", form: "rounded", label: "Generate Destination", pos: "b", h: 48 }
n9@{ icon: "mdi:cog", form: "rounded", label: "Fetch Destination", pos: "b", h: 48 }
n10@{ icon: "mdi:cog", form: "rounded", label: "Generate Hub", pos: "b", h: 48 }
n11@{ icon: "mdi:cog", form: "rounded", label: "Retrieve Hub List", pos: "b", h: 48 }
n12@{ icon: "mdi:cog", form: "rounded", label: "Modify Hub", pos: "b", h: 48 }
n13@{ icon: "mdi:cog", form: "rounded", label: "Fetch Organization Profile", pos: "b", h: 48 }
n14@{ icon: "mdi:cog", form: "rounded", label: "Retrieve Delegatee Details", pos: "b", h: 48 }
n15@{ icon: "mdi:cog", form: "rounded", label: "Generate Recipient", pos: "b", h: 48 }
n16@{ icon: "mdi:cog", form: "rounded", label: "Fetch Recipient", pos: "b", h: 48 }
n17@{ icon: "mdi:cog", form: "rounded", label: "Modify Recipient", pos: "b", h: 48 }
n18@{ icon: "mdi:cog", form: "rounded", label: "Duplicate Task", pos: "b", h: 48 }
n19@{ icon: "mdi:cog", form: "rounded", label: "Finalize Task", pos: "b", h: 48 }
n20@{ icon: "mdi:cog", form: "rounded", label: "Generate Task", pos: "b", h: 48 }
n21@{ icon: "mdi:cog", form: "rounded", label: "Remove Task", pos: "b", h: 48 }
n22@{ icon: "mdi:cog", form: "rounded", label: "Fetch Task", pos: "b", h: 48 }
n23@{ icon: "mdi:cog", form: "rounded", label: "Retrieve Task List", pos: "b", h: 48 }
n24@{ icon: "mdi:cog", form: "rounded", label: "Modify Task", pos: "b", h: 48 }
n25@{ icon: "mdi:cog", form: "rounded", label: "Auto Dispatch Team", pos: "b", h: 48 }
n26@{ icon: "mdi:cog", form: "rounded", label: "Generate Team", pos: "b", h: 48 }
n27@{ icon: "mdi:cog", form: "rounded", label: "Remove Team", pos: "b", h: 48 }
n28@{ icon: "mdi:cog", form: "rounded", label: "Fetch Team", pos: "b", h: 48 }
n29@{ icon: "mdi:cog", form: "rounded", label: "Retrieve Team List", pos: "b", h: 48 }
n30@{ icon: "mdi:cog", form: "rounded", label: "Team Time Estimates", pos: "b", h: 48 }
n31@{ icon: "mdi:cog", form: "rounded", label: "Modify Team", pos: "b", h: 48 }
n32@{ icon: "mdi:cog", form: "rounded", label: "Generate Worker", pos: "b", h: 48 }
n33@{ icon: "mdi:cog", form: "rounded", label: "Remove Worker", pos: "b", h: 48 }
n34@{ icon: "mdi:cog", form: "rounded", label: "Fetch Worker", pos: "b", h: 48 }
n35@{ icon: "mdi:cog", form: "rounded", label: "Retrieve Worker List", pos: "b", h: 48 }
n36@{ icon: "mdi:cog", form: "rounded", label: "Fetch Worker Schedule", pos: "b", h: 48 }
n37@{ icon: "mdi:cog", form: "rounded", label: "Modify Worker", pos: "b", h: 48 }
n5 -.-> n0
n22 -.-> n0
n28 -.-> n0
n18 -.-> n0
n10 -.-> n0
n34 -.-> n0
n12 -.-> n0
n7 -.-> n0
n20 -.-> n0
n26 -.-> n0
n21 -.-> n0
n27 -.-> n0
n11 -.-> n0
n24 -.-> n0
n31 -.-> n0
n23 -.-> n0
n29 -.-> n0
n19 -.-> n0
n32 -.-> n0
n1 -.-> n0
n33 -.-> n0
n2 -.-> n0
n6 -.-> n0
n16 -.-> n0
n3 -.-> n0
n37 -.-> n0
n4 -.-> n0
n35 -.-> n0
n9 -.-> n0
n15 -.-> n0
n17 -.-> n0
n13 -.-> n0
n25 -.-> n0
n8 -.-> n0
n14 -.-> n0
n36 -.-> n0
n30 -.-> n0
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n0 trigger
The Challenge: Dispatch requests get lost between Slack and Onfleet
Most teams don’t have a “system” for dispatch updates. They have Slack. A driver calls in sick, a delivery window changes, a customer asks to reroute, and the request lands in a channel that’s already moving fast. Someone has to interpret the message, open Onfleet, find the right task or worker, and make the update without breaking anything. And honestly, the hard part isn’t clicking buttons. It’s context switching, hunting for IDs, and second-guessing what the requester meant.
It adds up fast. Here’s where it breaks down in real operations.
- A simple “move this task to tomorrow” turns into 10 minutes of searching and confirming details.
- Updates get applied to the wrong task because Slack messages rarely include a clean task ID or a consistent format.
- People ask for status in Slack because they can’t tell what changed in Onfleet, which creates even more messages.
- When you’re doing this under pressure, the odds of forgetting a hub change or worker update go way up.
The Fix: Slack requests translated into Onfleet operations automatically
This workflow sets up an MCP (Model Context Protocol) server inside n8n that exposes Onfleet’s operations as “tools” an AI agent can call safely. In plain English: instead of a dispatcher manually turning a Slack message into a series of Onfleet updates, an AI agent can interpret the request, pick the correct Onfleet action (create/update task, update worker, auto-dispatch a team, and more), and pass the right parameters through. The workflow is built around Onfleet’s official n8n integration, so you’re not stitching together fragile API calls. It’s also “AI-ready” out of the box because tool parameters are designed to be filled from AI-provided values using n8n’s $fromAI() placeholders.
The workflow begins when an AI agent (or another workflow) hits the MCP trigger URL. From there, the agent selects the correct Onfleet tool operation and n8n executes it with error handling and logging. The result is a clean, structured Onfleet response that your agent can relay back to Slack as confirmation.
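Under the hood, an MCP tool call is a JSON-RPC message that names the tool and its arguments. The sketch below builds one such payload; the tool name (`modify_task`) and argument fields are illustrative placeholders based on typical MCP clients, not the template's exact identifiers:

```python
import json

def build_mcp_tool_call(tool_name, arguments, request_id=1):
    """Build a JSON-RPC 2.0 'tools/call' message like an MCP client sends."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Example: ask the gateway to reschedule a task (field names illustrative).
payload = build_mcp_tool_call(
    "modify_task",
    {"taskId": "TASK_SHORT_ID", "completeAfter": "2024-05-21T09:00:00Z"},
)
print(json.dumps(payload, indent=2))
```

In practice your agent layer builds this message for you; seeing the shape just makes it easier to debug what the agent actually sent.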
What Changes: Before vs. After
| What This Eliminates | Impact You’ll See |
|---|---|
| Hunting through Onfleet for the right task, worker, or hub | A “move this task” request resolves in about a minute instead of 10 |
| Updates applied to the wrong task from ambiguous Slack messages | Changes land on the intended record with consistent, structured inputs |
| “Did anyone actually update it?” status pings in the channel | Confirmations flow back to Slack, so everyone sees what changed |
Real-World Impact
Say your team processes about 20 dispatch changes a day (reroutes, worker updates, task edits). Manually, it’s easy to spend roughly 5 minutes per change between opening Onfleet, finding the right record, editing, and confirming in Slack. That’s around 100 minutes daily. With this workflow: the request comes in, the agent calls the right Onfleet action, and a confirmation message can be posted back in about a minute of human attention. You’re buying back roughly an hour a day, and the updates are more consistent.
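The back-of-envelope math above, as a script you can adapt to your own volume (all numbers are the rough estimates from the scenario, not measurements):

```python
# Rough dispatch-time math from the scenario above; plug in your own numbers.
changes_per_day = 20
manual_minutes_per_change = 5      # open Onfleet, find record, edit, confirm
automated_minutes_per_change = 1   # glance at the agent's confirmation

manual_total = changes_per_day * manual_minutes_per_change
automated_total = changes_per_day * automated_minutes_per_change
saved = manual_total - automated_total

print(f"{manual_total} min manual vs {automated_total} min automated; "
      f"~{saved} min saved per day")
```

With these estimates that works out to roughly 80 minutes back per day, in line with the “about an hour” figure above.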
Requirements
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- Onfleet to manage tasks, teams, and drivers.
- Slack for dispatch requests and confirmations.
- OpenAI API key (get it from the OpenAI dashboard)
Skill level: Intermediate. You’ll be pasting webhook URLs, connecting credentials, and tweaking prompts for your dispatch language.
Need help implementing this? Talk to an automation expert (free 15-minute consultation).
The Workflow Flow
An agent calls your MCP endpoint. You activate the workflow, copy the MCP trigger webhook URL, and connect it to your AI agent layer (this could be a Slack-based agent, Claude Desktop, or a custom internal tool).
The request is interpreted into structured inputs. Using the AI Agent plus the OpenAI Chat Model, the workflow turns messy, human phrasing (“swap driver for the 2pm run” or “push this stop to tomorrow morning”) into the fields Onfleet expects.
Onfleet actions run through official tool operations. n8n executes the chosen Onfleet operation (create/update tasks, auto-dispatch, worker updates, team lookups, and more). The workflow includes the full set of 37 operations, so you’re not boxed into a narrow use case.
Results come back as clean responses. The tool returns the native Onfleet response structure, which your agent can summarize back into Slack so people see what changed without logging into Onfleet.
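The interpretation step is easiest to picture as a tiny router: free-form Slack phrasing in, an Onfleet tool name out. In the template this mapping is done by the AI Agent plus the OpenAI Chat Model, not keyword rules; the sketch below only illustrates the input/output contract:

```python
# Toy intent router: map dispatch phrasing to an Onfleet operation name.
# The real workflow uses the AI Agent + OpenAI model for this; keyword
# matching here is just a stand-in to show the contract.
def route_request(message: str) -> str:
    text = message.lower()
    if "push" in text or "move" in text or "reschedule" in text:
        return "Modify Task"
    if "swap driver" in text or "reassign" in text:
        return "Modify Worker"
    if "rebalance" in text or "redispatch" in text:
        return "Auto Dispatch Team"
    return "Fetch Task"  # fall back to a read-only lookup

print(route_request("push this stop to tomorrow morning"))  # Modify Task
```

The value of the LLM version is that it handles phrasing this router never anticipated, while still emitting the same structured choice.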
You can easily modify the dispatch language and confirmation format to match how your team talks in Slack. See the full implementation guide below for customization options.
Step-by-Step Implementation Guide
Step 1: Configure the Onfleet MCP Gateway Trigger
This workflow starts from the Onfleet MCP Gateway trigger, which exposes the tool endpoints for all connected Onfleet operations.
- Add or open Onfleet MCP Gateway as the trigger node.
- Keep the default trigger settings (no parameters are required in this workflow).
- Ensure the workflow is saved so the MCP endpoint is generated and available.
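Once saved, the trigger exposes a webhook URL you paste into your agent layer. A minimal offline sketch of how a client might address it; the URL is a placeholder (copy the real one from the trigger node), and `tools/list` is the standard MCP discovery method:

```python
import json
import urllib.request

# Placeholder -- replace with the webhook URL shown on the trigger node.
MCP_URL = "https://your-n8n-host/mcp/onfleet-gateway"

def make_request(body: dict) -> urllib.request.Request:
    """Wrap a JSON-RPC body in a POST request to the MCP endpoint."""
    return urllib.request.Request(
        MCP_URL,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = make_request({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
# urllib.request.urlopen(req) would send it; omitted so the sketch stays offline.
```

Most agent frameworks do this plumbing for you; the sketch is mainly useful for a quick curl-style smoke test from a script.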
Step 2: Connect Onfleet Credentials
All onfleetTool nodes are connected to Onfleet MCP Gateway as AI tools. Credentials must be added to the parent trigger node, not each tool node.
- Open Onfleet MCP Gateway and add Onfleet credentials for the MCP tools to authenticate.
- Confirm the tools connected to Onfleet MCP Gateway include admin, task, team, hub, destination, and worker management nodes.
- Keep Flowpast Branding as a documentation-only sticky note (no configuration needed).
Step 3: Set Up Admin and Organization Tools
These tools handle organization-level administration and profile operations used by the MCP interface.
- Verify the admin tools are connected as AI tools: Generate Admin User, Remove Admin User, Retrieve Admin List, and Modify Admin User.
- Confirm organization operations are present: Fetch Organization Profile and Retrieve Delegatee Details.
- No additional parameters are required in these nodes for this workflow template.
Step 4: Configure Destination, Recipient, Hub, and Task Operations
This group handles addressable objects and task lifecycle actions exposed via MCP tools.
- Confirm destination tools are connected: Generate Destination and Fetch Destination.
- Confirm recipient tools are connected: Generate Recipient, Fetch Recipient, and Modify Recipient.
- Confirm hub tools are connected: Generate Hub, Retrieve Hub List, and Modify Hub.
- Confirm task tools are connected: Generate Task, Fetch Task, Retrieve Task List, Modify Task, Append Tasks, Revise Tasks, Duplicate Task, Finalize Task, and Remove Task.
- Keep Fetch Container Info available for container metadata retrieval.
Step 5: Configure Team and Worker Management Tools
These tools manage dispatch and workforce operations for Onfleet teams and workers.
- Confirm team tools are connected: Generate Team, Fetch Team, Retrieve Team List, Modify Team, Remove Team, Team Time Estimates, and Auto Dispatch Team.
- Confirm worker tools are connected: Generate Worker, Fetch Worker, Retrieve Worker List, Modify Worker, Remove Worker, and Fetch Worker Schedule.
- No node parameters are required for these tools in the template—operations are driven by MCP requests at runtime.
Step 6: Test and Activate Your Workflow
Validate the MCP trigger and tool responses before enabling the workflow in production.
- Click Execute Workflow on Onfleet MCP Gateway to simulate a test call.
- Send a sample MCP request and confirm responses from tools like Fetch Task or Retrieve Team List.
- Successful execution should return valid Onfleet data objects without authentication errors.
- Toggle the workflow to Active for production use.
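It helps to script the sanity check above so you can rerun it after any credential change. A sketch, assuming a response dict shaped roughly like Onfleet's (the "data" and "error" field names are illustrative; match them to the payloads you actually see in test executions):

```python
# Sanity check on a tool response before flipping the workflow to Active.
def looks_healthy(response: dict) -> bool:
    if "error" in response:  # auth and permission failures surface here
        return False
    data = response.get("data")
    # A healthy read returns a non-empty object or list of records.
    return isinstance(data, (list, dict)) and bool(data)

ok = looks_healthy({"data": [{"shortId": "abc123"}]})
bad = looks_healthy({"error": {"message": "Invalid API key"}})
print(ok, bad)  # True False
```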
Watch Out For
- Onfleet credentials can expire or need specific permissions. If things break, check the n8n Credentials page and your Onfleet API access first.
- If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
- Default prompts in AI nodes are generic. Add your brand voice and dispatch rules early (naming conventions, how you refer to hubs, what “tomorrow” means) or you will be editing outputs forever.
Common Questions
How long does setup take?
Usually about 30 minutes if you already have Onfleet and OpenAI credentials ready.
Can a non-technical team set this up?
Yes, but you’ll want one person who’s comfortable connecting credentials and testing a few example requests. No coding is required for the basic setup.
Is it free to run?
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in OpenAI API costs (for many teams, a few dollars a month at light usage).
Should I use n8n Cloud or self-host?
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
How do I customize it for the way my team works?
You can tailor the AI Agent instructions so your Slack phrasing maps cleanly to the right Onfleet operation, like “Modify Task” for reschedules or “Auto Dispatch Team” when routes need rebalancing. If you prefer stricter control, you can route specific request types through a Switch node so only approved actions run automatically. Common tweaks include adding a confirmation step for high-risk updates, enforcing naming rules for hubs/teams, and formatting the final “done” message so it matches how your dispatch channel works.
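A confirm-before-run gate is simple to reason about in code. The operation names below match the workflow's tool nodes, but which ones count as "high risk" is a policy choice on your side, not something the template enforces:

```python
# Sketch of a confirm-before-run gate for destructive operations.
# Which operations are "high risk" is your policy decision.
HIGH_RISK = {"Remove Task", "Remove Worker", "Remove Team", "Auto Dispatch Team"}

def needs_confirmation(operation: str) -> bool:
    return operation in HIGH_RISK

print(needs_confirmation("Remove Task"))  # True
print(needs_confirmation("Fetch Task"))   # False
```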
What’s the most common reason Onfleet calls fail?
Usually it’s an expired or incorrect API key stored in n8n Credentials. Update the Onfleet credential, then re-run a single test call like “Fetch Task” to confirm it works before you retry more complex actions. If it still fails, check that the Onfleet account has access to the resource you’re querying (team, worker, or organization) and that you’re not hitting rate limits during bulk operations.
How much volume can it handle?
On n8n Cloud’s entry plans, you can typically handle thousands of executions per month, which is plenty for many dispatch teams. If you self-host, there’s no execution cap; capacity depends on your server size and how many agent calls you run at once. Practically, most teams can process dispatch changes as fast as requests come in, since each Onfleet tool call is lightweight. If you start doing bulk updates (like updating hundreds of tasks), test during off-hours and add throttling.
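Throttling a bulk run can be as simple as spacing out the calls. A sketch; the delay value is illustrative, not a documented Onfleet rate limit:

```python
import time

# Space out bulk updates so a batch of task edits doesn't slam the API.
# Tune `delay` to whatever your account's rate limits tolerate.
def throttled(items, action, delay=0.25):
    results = []
    for item in items:
        results.append(action(item))
        time.sleep(delay)
    return results

updated = throttled(["task_1", "task_2"], lambda t: f"updated {t}", delay=0.01)
print(updated)
```

In n8n itself the same effect is usually achieved with a Loop Over Items node plus a short Wait between batches.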
Is this better than Zapier or Make?
Often, yes, because this setup is built for complex dispatch logic and a wide Onfleet surface area, not just “send a message when X happens.” You also get self-hosting, which matters when executions spike. Another advantage is tool coverage: this workflow includes the full set of Onfleet operations, so you’re not stuck when the request changes from “update a task” to “update a worker” midstream. Zapier or Make can still be fine for simple two-step alerts, and some teams prefer the simpler UI. If you’re torn, talk to an automation expert and we’ll map it to your volume and risk tolerance.
Dispatch will always change mid-day. This just makes the changes easier to handle, and a lot harder to mess up.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.