OpenAI + Slack: multilingual replies for ecommerce
Your support inbox gets messy fast. One customer asks in Spanish, another in French, and suddenly your “simple” order-status questions turn into slow back-and-forth, inconsistent answers, and more refunds than you want to admit.
If you’re a support lead, this is where quality slips. Ecommerce operators feel it when reviews drop. And for a marketing manager, it’s painful watching paid traffic convert… then churn because the post-purchase experience is chaotic. This OpenAI Slack chatbot workflow tightens that up with multilingual replies and a clean human handoff.
You’ll set up an n8n automation that detects language, replies using your policies, keeps context across messages, and routes edge cases to Slack so nothing gets missed.
How This Automation Works
Here’s the complete workflow you’ll be setting up:
n8n Workflow Template: OpenAI + Slack: multilingual replies for ecommerce
```mermaid
flowchart LR
subgraph sg0["When chat message received Flow"]
direction LR
n0@{ icon: "mdi:play-circle", form: "rounded", label: "When chat message received", pos: "b", h: 48 }
n1@{ icon: "mdi:robot", form: "rounded", label: "Structured Output Parser", pos: "b", h: 48 }
n2@{ icon: "mdi:brain", form: "rounded", label: "OpenAI Chat Model", pos: "b", h: 48 }
n3@{ icon: "mdi:swap-vertical", form: "rounded", label: "Split Out", pos: "b", h: 48 }
n4@{ icon: "mdi:brain", form: "rounded", label: "OpenAI Chat Model2", pos: "b", h: 48 }
n5@{ icon: "mdi:memory", form: "rounded", label: "Simple Memory", pos: "b", h: 48 }
n6@{ icon: "mdi:swap-vertical", form: "rounded", label: "Ecommerce Language Prompts", pos: "b", h: 48 }
n7@{ icon: "mdi:merge", form: "rounded", label: "Keep Only Selected Language", pos: "b", h: 48 }
n8@{ icon: "mdi:robot", form: "rounded", label: "Detect Language", pos: "b", h: 48 }
n9@{ icon: "mdi:robot", form: "rounded", label: "Chat Agent", pos: "b", h: 48 }
n3 --> n7
n5 -.-> n9
n8 --> n7
n2 -.-> n8
n4 -.-> n9
n1 -.-> n8
n6 --> n3
n0 --> n8
n0 --> n6
n7 --> n9
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n0 trigger
class n1,n8,n9 ai
class n2,n4 aiModel
class n5 ai
class n7 decision
```
Why This Matters: Multilingual Support Without the Chaos
Most ecommerce teams don’t struggle with “support” as a concept. They struggle with volume, repetition, and the little policy details that get lost when five different people answer the same question in three different languages. One agent says returns are 30 days, another says 14. Someone translates too literally and sounds rude. Then you spend your afternoon fixing misunderstandings instead of fixing the actual customer issue. It’s draining, and it quietly hits revenue through chargebacks, bad reviews, and customers who simply don’t come back.
It adds up fast. Here’s where it usually breaks down.
- Agents end up rewriting the same order-tracking and return answers all day, and it eats hours every week.
- When messages come in Spanish or French, response time slows because someone has to translate, double-check tone, and still be accurate.
- Policy answers drift over time, so customers get different outcomes depending on who replied.
- Edge cases get buried in the queue, which means your “urgent” issues become tomorrow’s fires.
What You’ll Build: A Multilingual Chat Agent With Slack Escalation
This workflow turns incoming chat messages into consistent, on-brand support replies in English, Spanish, and French. It starts when a customer message hits your chat entry point in n8n. An AI “language identification” agent determines which language the customer is using, then the workflow pulls the matching system prompt from your prompt library (so your return policy, shipping rules, and tone stay consistent). Next, a customer-support agent powered by OpenAI GPT-5 Nano generates the actual reply, using conversation memory so the customer doesn’t have to repeat themselves. If the workflow detects missing details or an edge-case request, you can route the conversation to humans in Slack instead of guessing.
The flow begins with the chat trigger and language detection. Then it matches a pre-written language prompt to guide the model. Finally, the agent produces a support-ready reply with context, and your team gets a clear escalation path for anything risky.
What You’re Building
| What Gets Automated | What You’ll Achieve |
|---|---|
| Detecting whether a message is in English, Spanish, or French | Replies in the customer’s own language, no manual translation |
| Drafting policy-aware replies with conversation memory | Consistent answers on returns, shipping, and tone |
| Routing edge cases to Slack | Humans handle risky requests with full context |
Expected Results
Say your store gets about 30 chat messages a day, and around half are repeat questions (tracking, returns, address changes). Manually, a careful reply often takes about 8 minutes once you look up the order, translate if needed, and write something polite. That’s roughly 4 hours of typing per day. With this workflow, most of those replies become a quick review and send, maybe 1 minute each, while the AI does the drafting and language handling in the background. You typically get about 3 hours back on a day like that.
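The back-of-envelope math above can be checked in a few lines (all numbers are the article's example assumptions, and the "about 3 hours" figure rounds down the result):

```python
# Article's example assumptions: 30 messages/day, 8 min manual, 1 min with AI drafts
msgs_per_day = 30
manual_minutes = 8       # look up order, translate if needed, write a polite reply
assisted_minutes = 1     # quick review-and-send of the AI draft

manual_total = msgs_per_day * manual_minutes       # 240 min, roughly 4 hours
assisted_total = msgs_per_day * assisted_minutes   # 30 min
hours_saved = (manual_total - assisted_total) / 60
print(round(hours_saved, 1))
```

That works out to about 3.5 hours on paper; calling it 3 leaves room for the handful of replies that still need real editing.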
Before You Start
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- OpenAI for multilingual chat generation
- Slack to receive human-handoff alerts
- OpenAI API key (get it from the OpenAI Platform)
Skill level: Beginner. You will connect accounts, paste prompts, and test a few sample chats.
Want someone to build this for you? Talk to an automation expert (free 15-minute consultation).
Step by Step
A customer message triggers the workflow. The “Incoming Chat Trigger” starts everything as soon as a new chat message arrives in n8n, along with whatever metadata you pass in (like a session ID or order number).
The workflow identifies the language and loads the right policy prompt. Your “Commerce Prompt Library” stores separate system prompts for English, Spanish, and French. The language identification agent chooses which one fits, and n8n merges that prompt into the message context so the model follows your rules.
OpenAI drafts a support-grade response with memory. The customer support agent uses the OpenAI chat model plus conversation memory to keep continuity. If the customer already shared an order number two messages ago, the agent can keep moving without asking again.
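Conceptually, the memory node behaves like a per-session message log keyed by the session ID from the trigger. A minimal Python stand-in (the function names here are hypothetical, not n8n internals):

```python
from collections import defaultdict

# Hypothetical stand-in for n8n's Simple Memory node: a message history per session ID
memory = defaultdict(list)

def remember(session_id: str, role: str, content: str) -> None:
    memory[session_id].append({"role": role, "content": content})

def history(session_id: str) -> list:
    return memory[session_id]

remember("sess-42", "user", "My order number is #1001.")
remember("sess-42", "user", "When will it arrive?")
print(len(history("sess-42")))  # 2 -- the agent sees both messages, so it never re-asks for the order number
```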
Edge cases can go to humans in Slack. When the agent can’t safely answer (missing details, unusual requests, policy conflicts), you can route that outcome into Slack so your team handles it with full context.
You can easily modify the prompt library to add more languages or change tone based on your needs. See the full implementation guide below for customization options.
Step-by-Step Implementation Guide
Step 1: Configure the Chat Trigger
Set up the workflow entry point so incoming chat messages start the automation and branch into language detection and prompt preparation.
- Add the Incoming Chat Trigger node as your trigger.
- Keep the default options configuration unless your chat channel needs custom settings.
- Connect Incoming Chat Trigger to both Language Identification and Commerce Prompt Library so they execute at the same time.
Incoming Chat Trigger outputs to both Language Identification and Commerce Prompt Library in parallel.
Step 2: Connect OpenAI for Language and Support Responses
Attach OpenAI models to the AI agents that detect language and respond to customers.
- Open Primary OpenAI Chat and set the model to `gpt-5-nano`.
- Credential Required: Connect your `openAiApi` credentials in Primary OpenAI Chat.
- Open Secondary OpenAI Chat and set the model to `gpt-5-nano`.
- Credential Required: Connect your `openAiApi` credentials in Secondary OpenAI Chat.
- Ensure Primary OpenAI Chat is connected to Language Identification as the language model and Secondary OpenAI Chat is connected to Customer Support Agent.
Step 3: Set Up Language Identification
Configure the AI parsing so the workflow can reliably detect the language of each incoming message.
- In Language Identification, confirm hasOutputParser is enabled.
- Set the systemMessage to `Identify what language this is written in. output the language. \n\noutput like this. all lower case\n\n{\n\t"language": "English"\n}`.
- Open Structured Result Parser and set jsonSchemaExample to `{\n\t"language": "English"\n}`.
- Ensure Structured Result Parser is connected to Language Identification as the output parser.
Step 4: Build the Prompt Library and Split Language Records
Provide multilingual system prompts and split them into individual records for matching.
- In Commerce Prompt Library, keep the assignments array that defines languages with English, Spanish, and French system prompts.
- Confirm the languages array items each contain language and system_prompt values.
- In Split Language Records, set fieldToSplitOut to `languages`.
- Connect Commerce Prompt Library → Split Language Records to output one language record per item.
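The data the set node holds looks roughly like this (prompts abbreviated; the exact wording is whatever policies you paste in), and Split Language Records turns the array into one item per record:

```python
# Hypothetical shape of the Commerce Prompt Library assignments (prompts abbreviated)
languages = [
    {"language": "english", "system_prompt": "You are a support agent. Returns: 30 days..."},
    {"language": "spanish", "system_prompt": "Eres un agente de soporte. Devoluciones: 30 dias..."},
    {"language": "french", "system_prompt": "Vous etes un agent de support. Retours : 30 jours..."},
]

# Split Language Records with fieldToSplitOut=languages emits one item per record
for record in languages:
    print(record["language"])
```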
Step 5: Match Language and Generate the Support Reply
Combine detected language with the correct prompt and respond to the customer with memory-enabled context.
- In Match Chosen Language, set mode to `combine` and enable advanced.
- Set mergeByFields to match `language` with `output.language`.
- Connect Split Language Records to Match Chosen Language and Language Identification to Match Chosen Language (index 1).
- In Conversation Memory, set sessionIdType to `customKey` and sessionKey to `{{ $('Incoming Chat Trigger').item.json.sessionId }}`.
- In Customer Support Agent, set text to `{{ $('Incoming Chat Trigger').item.json.chatInput }}` and systemMessage to `{{ $json.system_prompt }}`.
- Ensure Conversation Memory is connected to Customer Support Agent via the ai_memory port.
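The merge step is easiest to picture as a field match: of all the prompt records, only the one whose `language` equals the detector's `output.language` survives. A hedged Python sketch of that behavior (sample data only):

```python
# Hypothetical sketch of Match Chosen Language (merge mode: combine, match by field)
detected = {"output": {"language": "french"}}  # result from Language Identification
records = [                                    # items from Split Language Records
    {"language": "english", "system_prompt": "Reply in English."},
    {"language": "french", "system_prompt": "Reponds en francais."},
]

# mergeByFields: language == output.language keeps only the matching prompt record
match = next(r for r in records if r["language"] == detected["output"]["language"])
print(match["system_prompt"])
```

The surviving record's `system_prompt` is exactly what `{{ $json.system_prompt }}` resolves to in Customer Support Agent.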
Keep the language values (`english`, `spanish`, `french`) in Commerce Prompt Library lowercase so they match the output from Language Identification.
Step 6: Test and Activate Your Workflow
Validate that the automation detects language, selects the correct prompt, and responds with the proper tone before enabling it in production.
- Click Execute Workflow and send a test message through Incoming Chat Trigger.
- Verify Language Identification outputs a JSON object with `language` and that Match Chosen Language merges a matching prompt.
- Confirm Customer Support Agent responds in the same language as the input and uses the corresponding system prompt.
- Once verified, toggle the workflow to Active to handle live chats.
Troubleshooting Tips
- OpenAI credentials can expire or fail if billing is empty. If responses stop, check your OpenAI API key in n8n and confirm your OpenAI billing account has funds.
- If you’re using Wait nodes or any external steps you add later, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
- Default prompts in AI nodes are generic. Add your store policies and brand voice inside the “Commerce Prompt Library” set node or you will be editing outputs nonstop.
Quick Answers
**How long does setup take?**
About 30 minutes if your OpenAI key is ready and Slack is connected.
**Do I need coding skills?**
No. You’ll mostly edit prompts and connect credentials in n8n.
**How much does it cost to run?**
n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in OpenAI API costs, which are usually a few cents per conversation depending on message length.
**Should I use n8n Cloud or self-host?**
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
**Can I customize the languages, policies, and tone?**
Yes, and it’s mostly prompt work. Update the “Commerce Prompt Library” set node to add languages, swap policies, or change tone for different brands. If you want stricter outputs, tighten the “Customer Support Agent” instructions and keep a short list of allowed actions (tracking, returns, exchanges). You can also replace the language identification agent with a simple language-detection step if you prefer more predictable routing.
**Why are my OpenAI calls failing?**
Usually it’s an invalid API key or a billing issue on the OpenAI account. Regenerate your key, update the OpenAI credentials in n8n, and confirm billing has funds. If it still fails, check for model access problems (wrong model selected) or rate limiting when many chats arrive at once.
**How many chats can this handle?**
On n8n Cloud, the limit depends on your plan’s monthly executions, and self-hosting has no fixed execution cap (it depends on your server). Practically, most small stores can run hundreds of chats per day without changing anything, as long as OpenAI usage and your server resources keep up. If you expect spikes, add queueing or separate workflows for “draft response” and “human handoff.”
**Is n8n better than Zapier or Make for this?**
Often, yes. n8n is better suited to agent-style workflows where you need memory, branching, and prompt libraries, and you don’t want costs to explode when volume rises. It’s also easier to keep everything in one place: language detection, prompt selection, and the final response. Zapier and Make can work if you’re doing something very basic, like “send every chat to Slack,” but once you need logic and context, it gets fiddly. If you’re not sure, Talk to an automation expert and you’ll get a straight recommendation based on your volume and risk tolerance.
Set this up once, and your support team stops retyping the same answers in three languages. The workflow handles the repetitive parts, and Slack keeps humans in the loop when it actually matters.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.