January 22, 2026

OpenAI to Slack, support answers in one clean chat

Lisa Granqvist, Workflow Automation Expert

Support requests don’t arrive politely. They show up in bursts, with missing details, and they always seem to hit right when you’re trying to do real work.

This is where OpenAI Slack automation earns its keep. A Support Lead wants cleaner handoffs. A marketing ops person wants a simple way to triage “is this a lead or a ticket?”. And a founder just wants faster answers without living in the inbox.

This workflow publishes a slick web chat, routes each question to the right AI agent, and pushes the conversations that matter into Slack. You’ll see how it works, what you need, and what to tweak so it fits your process.

How This Automation Works

The full n8n workflow, from trigger to final output:

n8n Workflow Template: OpenAI to Slack, support answers in one clean chat

The Problem: Support Answers Get Messy Fast

Most teams don’t have a “support system.” They have a pile of places people ask for help. A customer writes in with a vague question, someone pings a teammate, the teammate replies with a guess, and then you get the dreaded follow-up: “Can you clarify?” That loop burns time and trust. The worst part is the mental switching. You’re answering the same categories of questions all day, but in different tabs, with different context, and no consistent voice.

It adds up fast. And the cracks usually show up in the same few spots:

  • People ask “quick questions” that still take 10 minutes to interpret, route, and reply to.
  • Answers vary by who’s online, which means customers get different guidance for the same issue.
  • High-signal conversations get buried, so product feedback and recurring issues never make it to the team.
  • You end up doing copy-paste triage into Slack anyway, usually after you’ve already lost momentum.

The Solution: A Web Chat That Routes to the Right AI (Then Slack)

This workflow gives you a ready-to-use web chat interface (served directly from n8n) and a separate AI-processing endpoint behind it. A user opens your chat page, types a message, and chooses the kind of help they need. Behind the scenes, the workflow sends that request to a webhook, routes it through a Switch based on the chosen agent type, and hands it to the right AI Agent. Each agent can have its own system prompt, tools, and memory, so “general support” doesn’t sound like “database lookup” or “documentation search.” Finally, n8n formats the output consistently and returns the response back to the interface so the user sees a clean answer in real time.

Practically, you get one chat UI and multiple specialized brains. The workflow starts with a GET webhook that serves the HTML interface, then a POST webhook that processes messages. Routing happens in the Switch node, responses are shaped in code, and the output is ready to send to Slack when you want key conversations visible to the team.
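To make the handoff concrete, here is a minimal sketch of the request the UI posts to the AI-processing endpoint. The `{ message, agent_type }` body shape matches what the Switch node inspects; the helper name is hypothetical.

```javascript
// Hypothetical helper showing the request the chat UI sends to the
// POST webhook. The body shape { message, agent_type } is what the
// Switch node checks via $json.body.agent_type.
function buildAgentRequest(message, agentType) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message, agent_type: agentType }),
  };
}

// In the UI, this object is passed to fetch(WEBHOOK_URL, ...):
const request = buildAgentRequest('Where can I find my invoice?', 'database');
```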

Example: What This Looks Like

Say you handle about 20 incoming questions a day. Manually, a typical flow is: read the message (2 minutes), decide who owns it (3 minutes), paste into Slack with context (3 minutes), then wait for a teammate or search docs (another 5 minutes). That’s roughly 13 minutes per question, or over four hours daily. With this workflow, the user chooses the right agent in the chat, the AI responds in under a minute, and you only push the few important conversations into Slack. You get most of that time back, and the rest feels calmer.

What You’ll Need

  • n8n instance (try n8n Cloud free)
  • Self-hosting option if you prefer (Hostinger works well)
  • OpenAI for the chat model powering the agents
  • Slack to send key conversations to your team
  • OpenAI API key (get it from the OpenAI dashboard)

Skill level: Intermediate. You’ll copy the provided UI code, set two webhook paths, and connect your OpenAI (and optional Slack) credentials.

Don’t want to set this up yourself? Talk to an automation expert (free 15-minute consultation).

How It Works

A visitor opens your chat page. A GET webhook triggers n8n to return a complete HTML/CSS/JS interface, so you can publish a clean support chat without building a frontend.

The message is posted back to n8n. When the user hits Send (and selects an agent button like General, Database, Web, or RAG), the UI makes a POST request to your AI webhook with the message and agent_type.

n8n routes it to the right agent. A Switch node checks agent_type and forwards the request to the matching AI Agent, backed by an OpenAI chat model and optional memory so responses stay coherent over a session.
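Expressed as plain JavaScript, the Switch logic is a four-way string comparison on `agent_type` (a sketch for illustration; in n8n this lives in the Switch node's rules, not code):

```javascript
// Plain-JS sketch of what the Switch node's rules do: compare
// $json.body.agent_type against one string per output branch.
function routeByAgentType(body) {
  switch (body.agent_type) {
    case 'general':  return 'General AI Agent';
    case 'database': return 'Database AI Agent';
    case 'web':      return 'Web AI Agent';
    case 'rag':      return 'RAG AI Agent';
    default:         return null; // no rule matches; no branch runs
  }
}
```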

The response is formatted and returned. A code step normalizes the output into a consistent response field, and “Respond to Webhook” sends it back so the UI can display it immediately. From there, you can also send selected conversations into Slack for visibility.

You can modify the agent types to match your real queues (billing, onboarding, bug reports). See the full implementation guide below for customization options.

Step-by-Step Implementation Guide

Step 1: Configure the Webhook Trigger

This workflow uses two webhook entry points: one to serve the HTML UI and another to receive agent requests.

  1. Open Inbound Webhook Trigger and set Path to b6f698e9-c16c-4273-8af2-20a958f691c1.
  2. Set Response Mode to responseNode on Inbound Webhook Trigger.
  3. Open Inbound Agent Webhook and set HTTP Method to POST.
  4. Set the Path on Inbound Agent Webhook to webhook-endpoint and Response Mode to responseNode.
Tip: The GET UI and POST agent endpoints are separate. Use the GET URL in a browser to load the interface and the POST URL in your front-end JavaScript.

Step 2: Connect OpenAI

The AI agents rely on OpenAI language models connected via dedicated OpenAI chat engine nodes.

  1. In OpenAI Chat Engine, select the model gpt-4.1-mini and confirm the connection to General AI Agent.
  2. Credential Required: Connect your openAiApi credentials in OpenAI Chat Engine.
  3. Credential Required: Connect your openAiApi credentials in OpenAI Chat Engine A (used by Database AI Agent).
  4. Credential Required: Connect your openAiApi credentials in OpenAI Chat Engine B (used by Web AI Agent).
  5. Credential Required: Connect your openAiApi credentials in OpenAI Chat Engine C (used by RAG AI Agent).
⚠️ Common Pitfall: Do not add credentials directly to General AI Agent, Database AI Agent, Web AI Agent, or RAG AI Agent. The credentials must be added to the connected OpenAI chat engine nodes.

Step 3: Set Up the HTML Interface Generator

The UI is generated dynamically and returned as a binary HTML file for the GET webhook.

  1. Open Generate HTML Interface and confirm the script includes the placeholder const WEBHOOK_URL = '[YOUR_WEBHOOK_URL]';.
  2. Replace [YOUR_WEBHOOK_URL] in Generate HTML Interface with the POST URL for Inbound Agent Webhook.
  3. Verify Return UI Response is set to Respond With binary so the HTML is delivered correctly.
⚠️ Common Pitfall: If the UI loads but sending messages fails, the most common cause is leaving [YOUR_WEBHOOK_URL] unchanged in Generate HTML Interface.
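For reference, the Code node ends by converting the HTML string into the base64 binary payload that Return UI Response serves. A trimmed sketch (the real node builds a much larger html string):

```javascript
// Trimmed sketch of the end of "Generate HTML Interface": the HTML
// string is base64-encoded so "Return UI Response" can serve it as
// a binary text/html file.
const html = '<!doctype html><html><body>Support chat</body></html>';
const buffer = Buffer.from(html, 'utf8');

const output = [{
  json: { message: 'AI Agent Interface generated successfully' },
  binary: {
    data: {
      data: buffer.toString('base64'),
      mimeType: 'text/html',
    },
  },
}];
// In the n8n Code node, the last statement is: return output;
```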

Step 4: Configure Agent Routing and Prompts

Messages are routed based on agent type and processed by the appropriate AI agent node.

  1. Open Route by Agent Type and confirm the rules match the agent types: general, database, web, and rag.
  2. In each rule, ensure the left value is the expression {{ $json.body.agent_type }} and the right value matches the agent type string.
  3. Update General AI Agent Text from the placeholder c'est un test simplement (French for “it’s simply a test”) to your desired instruction.
  4. Set Text in Database AI Agent, Web AI Agent, and RAG AI Agent to the specific prompts your agents should follow.

Step 5: Configure Output Formatting and Responses

Each agent’s output is normalized in a code node and then returned to the requesting client.

  1. Review Format General Output, Format Database Output, Format Web Output, and Format RAG Output to ensure they read input from Inbound Agent Webhook and return JSON with response, agent_type, and timestamp.
  2. Keep the expression reference inside each formatter intact: $('Inbound Agent Webhook').first().json.body.
  3. Confirm each formatter connects to its response node: Format General Output → Send General Response, Format Database Output → Send Database Response, Format Web Output → Send Web Response, and Format RAG Output → Send RAG Response.
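The formatter logic itself is short. Here it is as a standalone function (translated from the template's Code nodes, with the n8n lookups, $('Inbound Agent Webhook').first().json.body and $input.first().json, replaced by parameters so it can be read and tested on its own):

```javascript
// Standalone version of the "Format * Output" Code nodes.
// In n8n: webhookData = $('Inbound Agent Webhook').first().json.body
//         aiAgentOutput = $input.first().json
function formatAgentOutput(webhookData, aiAgentOutput) {
  // AI agents can return their text under different keys; try each in turn.
  const aiResponseText =
    aiAgentOutput.output ||
    aiAgentOutput.text ||
    aiAgentOutput.response ||
    aiAgentOutput.message ||
    JSON.stringify(aiAgentOutput);

  // Every branch returns the same shape so the UI always reads `response`.
  return [{
    json: {
      response: aiResponseText,
      agent_type: webhookData.agent_type,
      user_message: webhookData.message,
      timestamp: new Date().toISOString(),
    },
  }];
}
```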

Step 6: Test and Activate Your Workflow

Run a manual test to confirm the UI renders and the AI agents respond correctly.

  1. Click Execute Workflow and open the test URL for Inbound Webhook Trigger in your browser to load the interface.
  2. Enter a message, select an agent, and confirm the response appears in the UI and the workflow shows a successful execution path through Route by Agent Type.
  3. Verify that each response node (Send General Response, Send Database Response, Send Web Response, Send RAG Response) returns JSON containing the response field.
  4. Switch the workflow to Active to enable production use.

Common Gotchas

  • OpenAI credentials can expire or be scoped incorrectly. If things break, check your OpenAI API key status and billing limits in the OpenAI dashboard first.
  • If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
  • Default prompts in AI nodes are generic. Add your brand voice early or you’ll be editing outputs forever.

Frequently Asked Questions

How long does it take to set up this OpenAI Slack automation?

About an hour if you already have your OpenAI key and n8n running.

Do I need coding skills to automate OpenAI Slack automation?

No. You will paste the provided UI code and edit a webhook URL. The rest is connecting nodes and credentials in n8n.

Is n8n free to use for this OpenAI Slack automation workflow?

Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in OpenAI API costs, which are usually a few cents per conversation depending on model and prompt size.

Where can I host n8n to run this automation?

Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.

Can I customize this OpenAI Slack automation workflow for billing vs. technical support routing?

Yes, and it’s one of the best reasons to use this template. You can add new agent buttons in the UI (the agent cards in the HTML section) and then mirror that change in n8n by adding a new Switch rule for agent_type. From there, connect the new route to a dedicated AI Agent node with its own prompt (for billing language, refund rules, escalation rules). Common customizations include adding “Sales” and “Onboarding” agents, changing the default tone, and tagging certain keywords so those chats are the ones that get posted into Slack.

Why is my OpenAI connection failing in this workflow?

Usually it’s an invalid or expired API key, or a billing limit on the OpenAI account. Update the credential in n8n, then run a single test execution and check the node error output. If the UI is loading but replies never show, it can also be a mismatched webhook path in the UI code (the WEBHOOK_URL value) pointing at the wrong endpoint.

How many messages can this OpenAI Slack automation handle?

A lot, but it depends on your n8n plan and your OpenAI rate limits.

Is this OpenAI Slack automation better than using Zapier or Make?

For a multi-agent chat interface, n8n is usually the more practical choice. You can serve the UI from a webhook, route by agent type with unlimited branching, and keep the formatting consistent in code without fighting platform limits. Zapier and Make can work for “send this to OpenAI, then post to Slack” flows, but they get awkward when you need multiple agents, memory, and custom response shaping. Also, self-hosting means you’re not paying per tiny step. If you’re unsure, talk to an automation expert and you’ll get a straight recommendation.

Once this is in place, support stops feeling like whack-a-mole. The workflow handles the routing and the first reply, and Slack only gets the conversations worth a human’s attention.

Need Help Setting This Up?

Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.

Lisa Granqvist

Workflow Automation Expert

Expert in workflow automation and no-code tools.

[Chat interface preview: a message box plus “Or Select a Specialized Agent” cards for the Database Search Agent (finding specific records or analyzing stored data), Web Search Agent (real-time information, news, and public data), and RAG Knowledge Agent (searching your documents and knowledge base), with an AI Response panel below.]

Template Guide

What is this Template?

This is an n8n graphical input template that creates beautiful user interfaces for your workflows. Instead of API calls, your users get a clean interface to interact with your AI agents.

How it Works

Simple 3-node workflow in n8n:

  • Webhook Node (GET): receives the HTTP request
  • Code Node: generates the HTML interface
  • Respond to Webhook Node: returns the HTML as a binary file

The Agent Buttons

Three specialized agents for different tasks:

  • Database Agent: searches your database, runs SQL queries
  • Web Agent: searches the internet for current information
  • RAG Agent: searches your documents and knowledge base

Connecting to Your Workflow

Update the JavaScript functions sendMessage() and sendToAgent() to point to your n8n webhook URLs. The message and agent type will be sent to your workflow, where you can process them with your AI agents.

Customization

  • Change colors in the CSS variables section
  • Add more agent types by copying the agent-card structure
  • Modify the layout and design to match your brand
  • Update this guide with your own instructions
