WhatsApp + OpenAI, consistent support replies
Your support inbox looks calm. Until it doesn’t. One WhatsApp message turns into ten, someone asks a follow-up you can’t find, and now you’re rewriting the same “shipping policy” reply for the fifth time today.
This pain hits support leads hardest, but founders and account managers feel it too. You want fast responses that sound like your business, keep context across a conversation, and don’t drop the ball when someone sends a voice note or a PDF.
This guide breaks down the workflow that does exactly that: ingests WhatsApp messages (text and media), preserves memory, routes the request to the right “agent,” then sends a clean, bite-sized reply back.
How This Automation Works
Here’s the complete workflow you’ll be setting up:
n8n Workflow Template: WhatsApp + OpenAI, consistent support replies
flowchart LR
subgraph sg0["OpenAI Flow"]
direction LR
n0["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/redis.svg' width='40' height='40' /></div><br/>Redis"]
n1@{ icon: "mdi:cog", form: "rounded", label: "Wait", pos: "b", h: 48 }
n2["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/redis.svg' width='40' height='40' /></div><br/>Redis1"]
n3@{ icon: "mdi:swap-horizontal", form: "rounded", label: "If", pos: "b", h: 48 }
n4@{ icon: "mdi:cog", form: "rounded", label: "No Operation, do nothing", pos: "b", h: 48 }
n5@{ icon: "mdi:swap-vertical", form: "rounded", label: "Edit Fields1", pos: "b", h: 48 }
n6["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Responde texto"]
n7["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/redis.svg' width='40' height='40' /></div><br/>Redis2"]
n8@{ icon: "mdi:cog", form: "rounded", label: "Convert to File", pos: "b", h: 48 }
n9@{ icon: "mdi:swap-vertical", form: "rounded", label: "Edit Fields", pos: "b", h: 48 }
n10@{ icon: "mdi:cog", form: "rounded", label: "Convert to File1", pos: "b", h: 48 }
n11@{ icon: "mdi:swap-vertical", form: "rounded", label: "Edit Fields3", pos: "b", h: 48 }
n12@{ icon: "mdi:swap-horizontal", form: "rounded", label: "Switch", pos: "b", h: 48 }
n13@{ icon: "mdi:swap-horizontal", form: "rounded", label: "Switch1", pos: "b", h: 48 }
n14["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Responde imagem"]
n15["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Responde pdf"]
n16["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Responde vídeo"]
n17@{ icon: "mdi:swap-vertical", form: "rounded", label: "Edit Fields4", pos: "b", h: 48 }
n18@{ icon: "mdi:swap-vertical", form: "rounded", label: "Edit Fields5", pos: "b", h: 48 }
n19@{ icon: "mdi:swap-vertical", form: "rounded", label: "Split Out", pos: "b", h: 48 }
n20@{ icon: "mdi:swap-vertical", form: "rounded", label: "Loop Over Items", pos: "b", h: 48 }
n21@{ icon: "mdi:cog", form: "rounded", label: "Replace Me", pos: "b", h: 48 }
n22@{ icon: "mdi:cog", form: "rounded", label: "Wait1", pos: "b", h: 48 }
n23@{ icon: "mdi:swap-vertical", form: "rounded", label: "Variáveis Globais", pos: "b", h: 48 }
n24@{ icon: "mdi:swap-horizontal", form: "rounded", label: "If1", pos: "b", h: 48 }
n25["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/supabase.svg' width='40' height='40' /></div><br/>Supabase1"]
n26@{ icon: "mdi:swap-horizontal", form: "rounded", label: "If2", pos: "b", h: 48 }
n27@{ icon: "mdi:robot", form: "rounded", label: "OpenAI", pos: "b", h: 48 }
n28@{ icon: "mdi:robot", form: "rounded", label: "OpenAI1", pos: "b", h: 48 }
n29@{ icon: "mdi:brain", form: "rounded", label: "OpenAI Chat Model", pos: "b", h: 48 }
n30@{ icon: "mdi:cog", form: "rounded", label: "Crypto", pos: "b", h: 48 }
n31@{ icon: "mdi:robot", form: "rounded", label: "AI Agent", pos: "b", h: 48 }
n32@{ icon: "mdi:memory", form: "rounded", label: "Postgres Chat Memory", pos: "b", h: 48 }
n33["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/webhook.dark.svg' width='40' height='40' /></div><br/>Webhook"]
n34["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/merge.svg' width='40' height='40' /></div><br/>Merge"]
n35@{ icon: "mdi:swap-vertical", form: "rounded", label: "Edit Fields8", pos: "b", h: 48 }
n36@{ icon: "mdi:swap-vertical", form: "rounded", label: "Edit Fields9", pos: "b", h: 48 }
n37@{ icon: "mdi:swap-vertical", form: "rounded", label: "separa o base1", pos: "b", h: 48 }
n38@{ icon: "mdi:cog", form: "rounded", label: "Converte documento", pos: "b", h: 48 }
n39@{ icon: "mdi:cog", form: "rounded", label: "Extract from File", pos: "b", h: 48 }
n40@{ icon: "mdi:swap-vertical", form: "rounded", label: "separa o telefone e texto3", pos: "b", h: 48 }
n41@{ icon: "mdi:robot", form: "rounded", label: "OpenAI2", pos: "b", h: 48 }
n42@{ icon: "mdi:cog", form: "rounded", label: "Extract from File1", pos: "b", h: 48 }
n43["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Responde audio"]
n44["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/supabase.svg' width='40' height='40' /></div><br/>Supabase3"]
n45@{ icon: "mdi:swap-vertical", form: "rounded", label: "Encerrado", pos: "b", h: 48 }
n46@{ icon: "mdi:wrench", form: "rounded", label: "leadQualificado", pos: "b", h: 48 }
n47@{ icon: "mdi:cog", form: "rounded", label: "Wait3", pos: "b", h: 48 }
n48["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/supabase.svg' width='40' height='40' /></div><br/>Supabase"]
n3 --> n5
n3 --> n4
n24 --> n26
n24 --> n30
n26 --> n44
n1 --> n2
n34 --> n0
n0 --> n1
n22 --> n13
n47 --> n48
n30 --> n25
n27 --> n35
n2 --> n3
n7 --> n31
n12 --> n17
n12 --> n9
n12 --> n11
n12 --> n37
n28 --> n36
n41 --> n42
n13 --> n41
n13 --> n6
n13 --> n14
n13 --> n15
n13 --> n16
n33 --> n23
n31 --> n18
n48 --> n24
n45 -.-> n31
n19 --> n20
n25 --> n47
n44 --> n12
n9 --> n8
n5 --> n7
n11 --> n10
n17 --> n34
n18 --> n19
n35 --> n34
n36 --> n34
n15 --> n20
n43 --> n20
n6 --> n20
n37 --> n38
n8 --> n27
n20 --> n21
n20 --> n22
n14 --> n20
n16 --> n20
n46 -.-> n31
n10 --> n28
n39 --> n40
n29 -.-> n31
n38 --> n39
n42 --> n43
n23 --> n48
n32 -.-> n31
n40 --> n34
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n27,n28,n31,n41 ai
class n29 aiModel
class n46 ai
class n32 ai
class n3,n12,n13,n24,n26 decision
class n0,n2,n7 database
class n6,n14,n15,n16,n33,n43 api
classDef customIcon fill:none,stroke:none
class n0,n2,n6,n7,n14,n15,n16,n25,n33,n34,n43,n44,n48 customIcon
Why This Matters: Consistent Replies Without Losing Context
Manual WhatsApp support breaks in boring, expensive ways. A customer asks “Can I change my booking?” and you answer, then they reply two hours later with a screenshot and now you’re scrolling forever to remember what you already promised. Meanwhile another chat needs a simple template reply, but it gets delayed because you’re stuck handling a voice message you have to listen to twice. Add team handoffs and it gets worse. The customer hears three different “voices,” and you end up doing cleanup work that doesn’t show up anywhere on a dashboard.
It adds up fast. Here’s where the friction compounds.
- Agents answer from memory instead of a shared source of truth, so the same question gets three different replies in one day.
- Voice notes, images, and PDFs create a second workflow (download, open, interpret, respond) that drags response times out.
- Follow-ups get missed because there’s no reliable “conversation state” tracking across multiple turns.
- When you finally try to scale, the only “system” is whoever has the most patience for repetitive typing.
What You’ll Build: A Context-Aware WhatsApp Support Brain
This workflow turns incoming WhatsApp messages into consistent, on-brand replies using OpenAI, while keeping the thread of the conversation intact. It starts when a webhook receives a WhatsApp event (text, audio, image, or document). The workflow extracts key identifiers like the chat ID (remoteJid) and message metadata, then looks up or stores context so the next message continues the same “story.” Next, routing logic decides what kind of message it is and which specialized agent should handle it (for example, scheduling versus an info request versus media analysis). OpenAI generates the reply in the right style, long responses get split into smaller chunks for a better WhatsApp experience, and the automation sends the final message back through your messaging API endpoint.
In practical terms, you drop a message into your support number and the system takes over. Text gets answered immediately with memory-aware replies. Media gets converted and interpreted first, then answered with the correct format (text, audio, image, or file) so the customer doesn’t have to guess what to do next.
What You’re Building
| What Gets Automated | What You’ll Achieve |
|---|---|
| Ingesting WhatsApp text, voice notes, images, and PDFs | Every message type handled in one flow, with no manual download-and-interpret detour |
| Conversation state in Redis plus Postgres chat memory | Follow-ups continue the same thread instead of starting cold |
| Routing to the right agent and generating OpenAI replies | Consistent, on-brand answers split into WhatsApp-friendly chunks |
Expected Results
Say you handle about 30 WhatsApp conversations a day, and around 10 of them include a voice note, image, or PDF. Manually, you’re often spending about 5 minutes on a simple text thread, and closer to 10 minutes when media is involved, which works out to just over 3 hours a day of support time (20 text threads at 5 minutes plus 10 media threads at 10 minutes). With this workflow, the “human time” becomes checking edge cases: maybe 30–60 seconds to skim and approve an answer, while the parsing, context lookup, and sending happens automatically. For many teams, that’s about 2 to 3 hours back on a normal day.
Before You Start
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- WhatsApp messaging API endpoint for receiving and sending messages
- OpenAI API access to generate consistent support replies
- Redis server credentials (get it from your Redis host or self-hosted Redis)
Skill level: Intermediate. You’ll be connecting credentials, setting environment variables, and testing webhook payloads, but you won’t be writing code.
Want someone to build this for you? Talk to an automation expert (free 15-minute consultation).
Step by Step
A webhook receives the WhatsApp message. Your messaging provider posts an event to n8n, and the workflow captures the sender ID (remoteJid), message type, and any attached media metadata.
The payload gets cleaned up and classified. “Set” and “Switch” steps map fields into a consistent structure, then the flow routes messages by type (text versus document versus audio or image). For documents, it converts the file and extracts usable content first.
Context is stored and recalled. Redis nodes push/get conversation state so follow-ups don’t feel like a fresh ticket every time. On top of that, the AI Agent uses chat memory so it can reply like it has been paying attention.
OpenAI generates the reply and n8n sends it back. The agent selects the right tool path, builds a response, splits long outputs into WhatsApp-friendly chunks, and finally uses HTTP Request nodes to send text, audio, images, PDFs, or videos back to the customer.
You can modify the routing rules to match your own categories (sales, support, bookings). See the full implementation guide below for customization options.
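The routing step above boils down to a Switch on the message type. As a plain-code sketch (the branch names here are illustrative, not the exact node outputs):

```javascript
// Sketch of the "route by message type" Switch as plain logic.
// Branch names are illustrative -- map them to your own workflow paths.
function routeByType(kind) {
  switch (kind) {
    case "text":
      return "answer-directly";     // straight to the AI agent
    case "audio":
      return "transcribe-first";    // OpenAI transcription before answering
    case "image":
      return "describe-first";      // vision analysis before answering
    case "document":
      return "extract-text-first";  // convert + extract file contents
    default:
      return "fallback";            // unknown types get a safe default reply
  }
}

console.log(routeByType("audio")); // "transcribe-first"
```

Adding a new category (say, stickers or locations) is one more `case` here, or one more rule in the Switch node.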
Step-by-Step Implementation Guide
Step 1: Configure the Webhook Trigger
This workflow starts with an inbound webhook that captures the request payload and passes it into the processing chain.
- Add the Incoming Webhook Trigger node and keep the default webhook settings unless your source system requires a specific method or path.
- Execute the node once to generate the test URL, then configure your external system to post data to that URL.
- Connect Incoming Webhook Trigger to Set Global Vars to standardize inbound fields.
⚠️ Common Pitfall: Forgetting to execute Incoming Webhook Trigger once will leave the test URL blank.
Step 2: Connect Supabase and Gate the Workflow
Supabase is used to fetch and persist data while conditional branches determine the next actions.
- Connect Set Global Vars to Supabase Query Main to retrieve or validate records.
- From Supabase Query Main, route to Conditional Branch A, then to Conditional Branch B on the primary path.
- On the alternate path of Conditional Branch A, send data to Encrypt Data and then Supabase Query A.
- After Supabase Query A, connect to Pause Execution C, which loops back into Supabase Query Main for re-checking.
- From Conditional Branch B, connect to Supabase Query B, then to Route by Type for content handling.
⚠️ Common Pitfall: Missing Supabase credentials will cause Supabase Query Main, Supabase Query A, and Supabase Query B to fail. Add credentials before testing.
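The two conditional branches amount to a simple gate on the contact record. A sketch of that decision in plain code (the `blocked` and `status` fields are hypothetical stand-ins for whatever columns your Supabase table uses):

```javascript
// Sketch of Conditional Branch A/B as plain logic.
// record fields (blocked, status) are hypothetical -- map to your columns.
function routeContact(record) {
  if (!record) return "create";        // no row yet: encrypt ID, insert, re-check
  if (record.blocked) return "stop";   // alternate path: no-op, send nothing
  if (record.status === "closed") return "stop";
  return "continue";                   // primary path: on to content routing
}

console.log(routeContact(null));               // "create"
console.log(routeContact({ status: "open" })); // "continue"
```

The loop back through Pause Execution C exists so a freshly created record is re-fetched before the workflow proceeds.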
Step 3: Prepare Payload Mapping, Files, and Initial AI Calls
Multiple Set and file nodes normalize data, create files, and send input to OpenAI for analysis.
- Use Route by Type to direct data into the appropriate mapping path: Map Fields D, Map Fields B, Map Fields C, or Split Base Payload.
- From Split Base Payload, connect to Convert Document → Extract File Data → Split Phone Text to parse inbound documents.
- Send mapped data from Map Fields B to Build File → OpenAI Request A → Map Fields F.
- Send mapped data from Map Fields C to Build File B → OpenAI Request B → Map Fields G.
- Merge outputs in Combine Streams from Map Fields D, Map Fields F, Map Fields G, and Split Phone Text.
There are 10+ set nodes (e.g., Set Global Vars, Map Fields A through Map Fields G, Split Base Payload, Split Phone Text). Group them by function to keep your field mapping organized.
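Every one of those Set nodes is doing the same kind of job: projecting provider-specific fields into one consistent internal shape. A minimal sketch of that mapping, with illustrative field names:

```javascript
// Sketch: what a "Map Fields" Set node does -- project inbound fields into
// one consistent internal shape. Field names here are illustrative.
function mapFields(item) {
  return {
    phone: (item.chatId || "").split("@")[0],          // "5511...@s.whatsapp.net" -> digits
    kind: item.type === "conversation" ? "text" : item.type,
    body: item.text || item.caption || "",             // text body or media caption
    receivedAt: new Date().toISOString(),              // when we processed it
  };
}

const mapped = mapFields({
  chatId: "5511999999999@s.whatsapp.net",
  type: "conversation",
  text: "hi",
});
console.log(mapped.phone); // "5511999999999"
```

Once every branch emits this same shape, the Merge node downstream can combine text, audio, image, and document paths without special cases.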
Step 4: Configure Caching and Flow Control
Redis and Wait nodes create a timed loop that persists data and gates the AI agent step.
- Connect Combine Streams to Cache Store, then to Pause Execution, and onward to Cache Store 2.
- Route Cache Store 2 into Conditional Gate to decide whether to continue or pause.
- On the true path, connect Conditional Gate → Map Fields A → Cache Store 3 → AI Agent Core.
- On the false path of Conditional Gate, send data to No-Op Placeholder for safe termination.
⚠️ Common Pitfall: If Redis credentials are not configured, Cache Store, Cache Store 2, and Cache Store 3 will fail and the loop will break.
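The Redis + Wait combination implements a debounce buffer: rapid-fire messages get collected so the agent answers them together instead of replying to each fragment. Here is the idea with a plain `Map` standing in for Redis (in the real workflow, the list lives in Redis under the chat’s key):

```javascript
// Sketch of the Redis + Wait debounce pattern, with a Map standing in for
// Redis. Each inbound message is appended under the chat's key; after the
// Wait, the execution compares the count it stored with what is now in the
// buffer -- if more messages arrived meanwhile, this run steps aside (No-Op)
// and lets the latest run answer everything at once.
const buffer = new Map(); // key: chatId, value: array of message bodies

function push(chatId, text) {
  const list = buffer.get(chatId) || [];
  list.push(text);
  buffer.set(chatId, list);
  return list.length; // this execution remembers the count it saw
}

function shouldReply(chatId, seenCount) {
  const list = buffer.get(chatId) || [];
  return list.length === seenCount; // false if newer messages landed during Wait
}

const n = push("5511999@s.whatsapp.net", "hi");
push("5511999@s.whatsapp.net", "one more thing"); // arrives during the Wait
console.log(shouldReply("5511999@s.whatsapp.net", n)); // false: the newer run replies
```

This is why breaking the Redis loop is so disruptive: without the buffer, every fragment of a multi-message question gets its own disjointed answer.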
Step 5: Set Up the AI Agent and Tools
The AI agent uses a chat model, memory, and tools to enrich and qualify responses.
- Open AI Agent Core and verify it connects to Chat Model Provider for language output.
- Ensure Postgres Memory is connected to AI Agent Core to persist conversation state.
- Attach Closed Status Tool and Qualified Lead Tool as tools in AI Agent Core for status updates and lead qualification.
- Continue the flow from AI Agent Core to Map Fields E for standardized outputs.
Chat Model Provider, Postgres Memory, Closed Status Tool, and Qualified Lead Tool are sub-nodes connected to AI Agent Core. Add credentials in the parent or linked nodes (not inside the tools themselves).
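The key setting inside Postgres Memory is the session key expression, which scopes each conversation’s history. Keying by the chat ID keeps customers’ histories isolated; a prefix per department is an optional idea, not something the template requires:

```javascript
// Sketch: a memory session key. Keying by the chat ID (remoteJid) keeps each
// customer's history isolated. The department prefix is an optional
// assumption, useful if you later split routing by team.
function sessionKey(chatId, prefix = "support") {
  return `${prefix}:${chatId}`;
}

console.log(sessionKey("5511999@s.whatsapp.net")); // "support:5511999@s.whatsapp.net"
```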
Step 6: Configure Batch Handling and Media Responses
Records are split into batches, media outputs are routed, and final replies are sent via HTTP.
- From Map Fields E, connect to Split Records and then Batch Iterator to control throughput.
- From Batch Iterator, keep both outputs: one to Placeholder Step and one to Pause Execution B.
- Send Pause Execution B into Route Media Output, which branches into either OpenAI Request C (audio path) or directly to Send Text Reply, Send Image Reply, Send PDF Reply, and Send Video Reply.
- For the audio path, chain OpenAI Request C → Extract File Data B → Send Audio Reply.
- Ensure every send node (Send Text Reply, Send Image Reply, Send PDF Reply, Send Video Reply, Send Audio Reply) returns to Batch Iterator so the next record is processed.
⚠️ Common Pitfall: Forgetting to route reply nodes back into Batch Iterator will stop batch processing after the first item.
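The chunking that makes long AI answers readable on WhatsApp can be sketched like this. The 1000-character limit is an assumption to tune, well under WhatsApp’s actual cap, and it breaks on paragraph boundaries so messages read naturally (a single paragraph longer than the limit is sent whole):

```javascript
// Sketch: split a long reply into WhatsApp-friendly chunks on paragraph
// boundaries. maxLen = 1000 is an assumed, tunable limit.
function chunkReply(text, maxLen = 1000) {
  const chunks = [];
  let current = "";
  for (const para of text.split("\n\n")) {
    if (current && (current + "\n\n" + para).length > maxLen) {
      chunks.push(current.trim()); // flush the chunk we've built so far
      current = para;
    } else {
      current = current ? current + "\n\n" + para : para;
    }
  }
  if (current.trim()) chunks.push(current.trim());
  return chunks;
}

console.log(chunkReply("aaaa\n\nbbbb\n\ncccc", 10)); // [ 'aaaa\n\nbbbb', 'cccc' ]
```

Each chunk then becomes one item for Batch Iterator, which is why the reply nodes must loop back: one long answer can turn into several sends.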
Step 7: Test & Activate Your Workflow
Run a full test from the webhook trigger and confirm each branch returns expected responses.
- Click Execute Workflow and send a sample payload to Incoming Webhook Trigger.
- Verify records pass through Supabase Query Main, branch correctly via Conditional Branch A and Conditional Branch B, and reach Route by Type.
- Confirm AI outputs appear in Map Fields F and Map Fields G, and merged content reaches AI Agent Core.
- Check that media routing sends the correct output to Send Text Reply, Send Image Reply, Send PDF Reply, Send Video Reply, or Send Audio Reply.
- When tests succeed, toggle the workflow to Active for production use.
Troubleshooting Tips
- OpenAI credentials can expire or be tied to the wrong project. If replies suddenly fail, check your OpenAI API key in n8n Credentials first, then confirm your account still has billing enabled.
- If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
- Your messaging API endpoint (the HTTP Request “send reply” steps) often requires specific headers and a valid API key. When delivery fails, look at the HTTP status code and the provider’s logs before you touch the AI prompts.
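When debugging delivery, it helps to see the whole request the HTTP Request node is building. This sketch shows the shape only; the URL, header name, and body fields are hypothetical, since every messaging provider differs, so copy the exact names from your provider’s API docs:

```javascript
// Sketch: the shape of the "send reply" HTTP request. URL, header name, and
// body fields are hypothetical stand-ins -- use your provider's exact values.
function buildSendRequest(apiKey, phone, text) {
  return {
    method: "POST",
    url: "https://api.example-provider.com/message/sendText", // hypothetical endpoint
    headers: {
      "Content-Type": "application/json",
      apikey: apiKey, // many providers return 401 on a missing or stale key
    },
    body: JSON.stringify({ number: phone, text }),
  };
}

const req = buildSendRequest("MY_KEY", "5511999999999", "Order shipped!");
console.log(req.headers.apikey); // "MY_KEY"
```

If a send fails, compare each of these pieces against a request that works in the provider’s own API tester before touching the AI side of the workflow.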
Quick Answers
How long does setup take?
About 45 minutes if your messaging API and OpenAI key are ready.
Do I need to know how to code?
No. You’ll mostly connect credentials, paste environment variables, and test with real messages.
Is this free to run?
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in OpenAI API usage, which is usually a few cents per conversation depending on message length and media processing.
Where should I host n8n?
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
Can I customize the workflow?
Yes, and you should. Most customizations happen in the “Route by Type” and “Route Media Output” Switch nodes, plus the AI Agent Core instructions that define tone and boundaries. Common tweaks include adding a “sales” route, changing how you detect intent, swapping Redis keys to separate departments, and sending escalations to Telegram instead of replying automatically.
What if replies aren’t being delivered?
Usually the issue is on the HTTP Request “Send Text Reply/Send Media Reply” nodes: a wrong endpoint URL, missing auth header, or an expired API key from your messaging provider. Check the execution log for the failing request and confirm the provider received it. Also verify your webhook is reachable from the public internet; if the provider can’t POST to your webhook, nothing else runs. If only media replies fail, it’s often a file URL or content-type mismatch.
How many conversations can this handle?
On n8n Cloud Starter you can run a healthy number of daily conversations for a small team, and higher tiers handle more. If you self-host, executions aren’t capped by n8n, but you will be limited by your server size, your messaging provider’s rate limits, and how heavy your media processing is. Practically, this workflow is fine for steady support traffic; if you get spikes, add longer waits and queueing via Redis so messages don’t collide.
Why use n8n instead of Zapier or Make?
For this workflow, n8n has a few advantages: more complex logic with unlimited branching at no extra cost, a self-hosting option for unlimited executions, and native AI agent plus memory patterns that are awkward (and pricey) elsewhere. Zapier or Make can be fine for simple “message in → reply out” flows. But once you want stateful context, media handling, routing, and tool sub-workflows, you’ll feel the limits quickly. n8n also gives you clearer execution logs, which honestly matters when a customer says “your bot ignored me.” Talk to an automation expert if you want a quick recommendation for your specific stack.
You get consistent replies, real context, and far fewer “sorry, can you repeat that?” moments. Set it up once, then let the workflow carry the repetitive load.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.