January 22, 2026

ServiceNow + OpenAI: instant KB answers in chat

Lisa Granqvist, Workflow Automation Expert

Your ServiceNow knowledge base is full of answers. People still ask the same questions in chat, DMs, and tickets because searching feels slow, confusing, or pointless.

This is what burns IT support leads first, but operations managers and internal enablement teams get dragged in too. With this ServiceNow OpenAI chat automation, you turn existing KB articles into fast, consistent chat replies, which means fewer repeat pings and cleaner ticket queues.

Below, you’ll see how the workflow indexes your ServiceNow KB into a searchable vector database, then uses OpenAI to answer chat questions using that content.

How This Automation Works

The full n8n workflow, from trigger to final output:

n8n Workflow Template: ServiceNow + OpenAI: instant KB answers in chat

The Problem: Your KB Exists, But It’s Not “In the Moment”

ServiceNow knowledge articles are usually written with good intentions, then buried behind a search experience most employees don’t trust. So they do the easy thing: ask in chat. The result is a steady drip of interruptions that feels small in the moment, but it stacks up fast across a week. Support reps end up retyping the same “official” answer, managers get pulled into escalations that should never exist, and tiny inconsistencies creep in (“we changed that policy last month”). Honestly, it’s not a knowledge problem. It’s a delivery problem.

Here’s where it breaks down in real teams.

  • People ask in chat because KB search takes too many clicks, especially on mobile or during an outage.
  • Support answers drift over time, so employees stop believing the KB is the source of truth.
  • New hires don’t know what to search for, so they ask broad questions and get broad, unhelpful replies.
  • Every repeated question steals focus, and the backlog grows while your best people are stuck copy-pasting.

The Solution: ServiceNow KB Answers, Delivered Instantly in Chat

This n8n workflow turns your ServiceNow Knowledge Article table into a “chat-ready” brain, then uses OpenAI to respond to questions with the right context. It starts by fetching many KB records from ServiceNow, cleaning and structuring them for AI, and splitting long articles into smaller chunks that are easier to search. Next, OpenAI generates embeddings (a numeric representation of meaning) for each chunk, and the workflow stores them in Qdrant, a vector database designed for fast similarity search. On the chat side, when someone sends a message, the workflow embeds the question, retrieves the most relevant KB chunks from Qdrant, and hands that context to an OpenAI chat model through an AI Agent. The user gets a clear answer in the chat flow, without anyone hunting for links.

The workflow has two “lanes.” One lane ingests and indexes your ServiceNow KB on demand (manual start). The other lane answers incoming chat messages using retrieval from Qdrant plus OpenAI reasoning, with a small memory buffer to keep the conversation coherent.

Example: What This Looks Like

Say your team gets 20 repeat questions a day in chat (“VPN not working,” “how do I reset MFA,” “where’s the policy”). A human answer is rarely just one message, so call it 5 minutes each on average, which is about 100 minutes daily. With this workflow, the “work” becomes: send the question (seconds), let retrieval and generation run (usually under a minute), and the user gets the reply. That’s most of those 100 minutes back every day (well over an hour and a half), without asking anyone to change tools or behavior.
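The time estimate above is back-of-the-envelope math, and the inputs (20 questions, 5 minutes each) are illustrative assumptions you should swap for your own numbers:

```python
# Illustrative back-of-the-envelope math; the inputs are assumptions, not measurements.
repeat_questions_per_day = 20
minutes_per_human_answer = 5

minutes_spent_daily = repeat_questions_per_day * minutes_per_human_answer
hours_reclaimed = minutes_spent_daily / 60

print(minutes_spent_daily)          # 100 minutes of interruptions per day
print(round(hours_reclaimed, 1))    # about 1.7 hours reclaimed
```

Plug in your own ticket-deflection numbers to see whether the build is worth an afternoon of setup.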

What You’ll Need

  • n8n instance (try n8n Cloud free)
  • Self-hosting option if you prefer (Hostinger works well)
  • ServiceNow access to your Knowledge Article records
  • OpenAI API key for embeddings and chat responses
  • Qdrant instance to store and search KB embeddings

Skill level: Intermediate. You’ll connect accounts, add API keys, and understand which ServiceNow table/fields you’re indexing.

Don’t want to set this up yourself? Talk to an automation expert (free 15-minute consultation).

How It Works

Manual indexing run. You click execute, and n8n fetches many records from the ServiceNow Knowledge Article table so you can build (or rebuild) the search index when you need it.

Content preparation for AI. The workflow loads each article into a consistent document format, then splits large articles into smaller segments so retrieval doesn’t miss the relevant paragraph buried halfway down.

Embeddings + storage. OpenAI turns each text segment into an embedding, and Qdrant stores those vectors along with metadata like article identifiers and titles for later lookup.

Chat question answering. When a message arrives, the workflow embeds the question, uses Qdrant to retrieve the closest KB chunks, and the AI Agent composes a reply using the OpenAI chat model plus a short conversation memory buffer.

You can easily modify which ServiceNow knowledge bases get indexed and how many chunks are retrieved for each answer based on your needs. See the full implementation guide below for customization options.

Step-by-Step Implementation Guide

Step 1: Configure the Webhook Trigger

Set up the live chat entry point that kicks off the AI response flow.

  1. Add the Incoming Chat Trigger node to your workflow.
  2. Set Mode to webhook.
  3. Set Public to true and Authentication to basicAuth.
  4. Credential Required: Connect your httpBasicAuth credentials in Incoming Chat Trigger.
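n8n enforces the basic-auth check for you, but it helps to know what a chat client must actually send. Here is a hedged sketch of building and verifying the `Authorization` header (the username and password are placeholders, not values from the template):

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    """Build the Authorization header a chat client sends to the webhook."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

def check_auth(header: str, user: str, password: str) -> bool:
    """Rough equivalent of the server-side comparison n8n's basicAuth option performs."""
    return header == basic_auth_header(user, password)

# Placeholder credentials for illustration only.
hdr = basic_auth_header("chat-client", "s3cret")
print(hdr)
print(check_auth(hdr, "chat-client", "s3cret"))
```

Any client that can set a header (a chat widget, a script, curl) can authenticate this way.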

Step 2: Connect ServiceNow

Load knowledge base articles from ServiceNow to build the semantic index.

  1. Add the Manual Start Trigger node to manually run ingestion.
  2. Add the Fetch ServiceNow Records node and connect it to Manual Start Trigger.
  3. Set Resource to tableRecord and Operation to getAll.
  4. Set Return All to true and Table Name to kb_knowledge.
  5. Set Authentication to basicAuth and add sysparm_fields to include number, short_description, and text.
  6. Credential Required: Connect your serviceNowBasicApi credentials in Fetch ServiceNow Records.
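Under the hood, this step amounts to a call against ServiceNow's Table API. A minimal sketch of the request URL the node builds (the instance name is a placeholder, and no request is actually sent here):

```python
from urllib.parse import urlencode

# Hypothetical instance name; replace with your own ServiceNow instance.
instance = "your-instance"
table = "kb_knowledge"

# Only pull the fields the indexing pipeline needs.
params = urlencode({
    "sysparm_fields": "number,short_description,text",
    "sysparm_limit": 100,  # fetch in pages; n8n's "Return All" handles looping for you
})

url = f"https://{instance}.service-now.com/api/now/table/{table}?{params}"
print(url)
```

Restricting `sysparm_fields` keeps payloads small, which matters once you index hundreds of articles.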

Step 3: Set Up the Knowledge Indexing Pipeline

Transform ServiceNow articles into embeddings and store them in Qdrant for retrieval.

  1. Add Recursive Text Segmenter and set Chunk Size to 500 and Chunk Overlap to 50.
  2. Add Standard Data Loader and connect it so Recursive Text Segmenter feeds into it as the text splitter.
  3. Add OpenAI Embedding Generator to generate embeddings for the index.
  4. Add Qdrant Vector Index and set Mode to insert and Qdrant Collection to rag_collection.
  5. Connect Fetch ServiceNow Records → Qdrant Vector Index so ingestion flows into storage.
  6. Credential Required: Connect your openAiApi credentials in OpenAI Embedding Generator.
  7. Credential Required: Connect your qdrantApi credentials in Qdrant Vector Index.

Tip: Run the ingestion path with Manual Start Trigger any time your ServiceNow knowledge base changes.
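The Chunk Size and Chunk Overlap settings above map to a simple sliding-window idea. This toy splitter illustrates the mechanics (n8n's Recursive Character Text Splitter is smarter: it also tries to break on paragraph and sentence boundaries):

```python
def split_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Naive fixed-window splitter: each chunk overlaps the previous by `overlap` chars."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

article = "x" * 1200  # stand-in for a long KB article
chunks = split_text(article)
print(len(chunks), [len(c) for c in chunks])
```

The overlap means a sentence that straddles a chunk boundary still appears whole in at least one chunk, which is what keeps retrieval from missing answers buried mid-article.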

Step 4: Configure Retrieval and AI Response

Wire the conversational agent to Qdrant retrieval and memory for grounded responses.

  1. Add Qdrant Retrieval Tool and set Mode to retrieve-as-tool, Top K to 10, Tool Name to retriever, and Tool Description to Retrieve data from a semantic database to answer questions.
  2. Add OpenAI Embedding Builder and connect it as the embedding source for Qdrant Retrieval Tool.
  3. Add OpenAI Chat Engine and set Model to gpt-4.1-mini.
  4. Add Context Memory Buffer and connect it to Conversational AI Agent as the memory input.
  5. Add Conversational AI Agent and set its System Message to the full instruction block from the node so it always retrieves and cites articles.
  6. Connect Incoming Chat Trigger → Conversational AI Agent as shown in the execution flow.
  7. Credential Required: Connect your qdrantApi credentials in Qdrant Retrieval Tool.
  8. Credential Required: Connect your openAiApi credentials in OpenAI Chat Engine and OpenAI Embedding Builder.

⚠️ Common Pitfall: Context Memory Buffer, OpenAI Embedding Generator, and OpenAI Embedding Builder are AI sub-nodes. They hang off a parent node (Conversational AI Agent for memory, the Qdrant nodes for embeddings) rather than sitting in the main execution flow, so wire each one to the correct parent and attach credentials there before testing.
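Retrieval itself boils down to cosine similarity between the question embedding and the stored chunk embeddings. Here is a toy in-memory version of what the Qdrant Retrieval Tool does at much larger scale (the vectors are tiny made-up examples; real OpenAI embeddings have over a thousand dimensions):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Fake "embeddings" keyed by KB chunk; hypothetical article numbers for illustration.
index = {
    "KB0001: reset MFA": [0.9, 0.1, 0.0],
    "KB0002: VPN troubleshooting": [0.1, 0.9, 0.2],
    "KB0003: expense policy": [0.0, 0.2, 0.9],
}

def top_k(query_vec, k=2):
    """Return the k chunk titles most similar to the query vector."""
    scored = sorted(index.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [title for title, _ in scored[:k]]

print(top_k([0.85, 0.2, 0.05]))  # the MFA chunk ranks first
```

The Top K setting in the node is exactly the `k` here: how many of the closest chunks get handed to the chat model as context.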

Step 5: Test and Activate Your Workflow

Validate ingestion and chat response before turning the workflow on.

  1. Click Execute Workflow on Manual Start Trigger to ingest ServiceNow articles into Qdrant Vector Index.
  2. Send a test request to the Incoming Chat Trigger webhook URL using the configured basic auth credentials.
  3. Confirm that Conversational AI Agent calls Qdrant Retrieval Tool and returns an answer grounded in the KB content with article references.
  4. When the test is successful, toggle the workflow Active to enable production use.

Common Gotchas

  • ServiceNow credentials can expire or need specific permissions. If things break, check the ServiceNow user’s roles and the integration user’s API access first.
  • If you’re indexing a lot of long articles, embedding and Qdrant writes can take time. If downstream nodes fail on empty results, increase any wait/retry behavior and watch for rate limits on the OpenAI side.
  • Default prompts in AI nodes are generic. Add your support tone, escalation rules, and “when to link the KB vs. summarize it” early or you will be polishing answers forever.

Frequently Asked Questions

How long does it take to set up this ServiceNow OpenAI chat automation?

About 60–90 minutes if your ServiceNow and OpenAI access is ready.

Do I need coding skills to automate ServiceNow KB chat answers?

No. You will mostly connect accounts and paste API keys. The “hard part” is deciding which KB content should be indexed.

Is n8n free to use for this ServiceNow OpenAI chat workflow?

Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in OpenAI API usage (often a few dollars a month for internal support volumes) plus Qdrant hosting if you don’t run it yourself.

Where can I host n8n to run this automation?

Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.

Can I customize this ServiceNow OpenAI chat workflow for multiple knowledge bases?

Yes, but you’ll want to be intentional. Most teams filter which ServiceNow knowledge bases (or article states) get pulled in the “Fetch ServiceNow Records” node, then store metadata in Qdrant so retrieval can prefer the right source. You can also adjust the text splitting behavior in the “Recursive Text Segmenter” node if your articles are short, or if they contain big tables that don’t chunk cleanly. Finally, tune how many results Qdrant returns in the “Qdrant Retrieval Tool” so the chat model doesn’t get overwhelmed with context.

Why is my ServiceNow connection failing in this workflow?

Usually it’s permissions or an expired credential on the ServiceNow integration user. Confirm the account can read the Knowledge Article table you’re querying, then re-authenticate in n8n and try a small fetch to validate. If it works in small batches but fails at scale, look for API limits or a query that’s pulling too many records at once.
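If large fetches fail but small ones succeed, a quick sanity test is to page through the table in small batches using ServiceNow's `sysparm_limit` and `sysparm_offset` parameters. A sketch of building those paged request URLs (the instance name is a placeholder, and nothing is actually requested here):

```python
from urllib.parse import urlencode

def paged_urls(instance: str, table: str, total: int, page_size: int = 50):
    """Yield Table API URLs that fetch `total` records in `page_size` batches."""
    for offset in range(0, total, page_size):
        params = urlencode({"sysparm_limit": page_size, "sysparm_offset": offset})
        yield f"https://{instance}.service-now.com/api/now/table/{table}?{params}"

urls = list(paged_urls("your-instance", "kb_knowledge", total=120, page_size=50))
print(len(urls))   # 3 batches: offsets 0, 50, 100
print(urls[0])
```

If batches succeed where one big fetch didn't, you're hitting instance query limits rather than a credential problem.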

How many knowledge articles can this ServiceNow OpenAI chat automation handle?

There's no hard cap in the workflow itself. Qdrant is built to search millions of vectors, so in practice the limits are the time and OpenAI cost of the initial embedding run and how often you re-index; thousands of articles is routine for this setup.

Is this ServiceNow OpenAI chat automation better than using Zapier or Make?

For RAG-style workflows, yes, most of the time. You’re doing embeddings, chunking, vector search, and an agent-style chat response, which is more than a simple “if X then Y” integration. n8n handles branching and data shaping without turning every extra step into a cost decision, and self-hosting is an option if volume grows. Zapier or Make can be fine for lightweight routing, but they’re not built around vector retrieval. If you’re on the fence, talk to an automation expert and you’ll get a straight answer.

Once this is running, your ServiceNow KB stops being a dusty archive and starts acting like a real-time teammate. Set it up, keep your articles updated, and enjoy the quiet.

Need Help Setting This Up?

Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.
