January 22, 2026

GitHub Models + OpenAI, swap LLMs without rewrites

Lisa Granqvist, Workflow Automation Expert

Your prompts work. Your workflow works. Then someone says “let’s try a new model,” and suddenly you’re touching half a dozen nodes, swapping credentials, and praying you didn’t break something subtle.

Marketing ops teams feel it when experiments stall. Agency owners feel it when every client wants a different model. And product leads feel it when “quick testing” turns into an engineering ticket. This GitHub Models integration automation keeps your OpenAI-style prompts stable while you switch models behind the scenes.

This workflow builds a small compatibility layer in n8n. You will see how it routes “list models” and “chat completion” requests to GitHub Models, then returns responses in the shape your existing OpenAI-based nodes already expect.

How This Automation Works


The Challenge: Switching LLMs Breaks Working Workflows

Model testing sounds simple until you do it more than once. One workflow runs an AI Agent node, another uses a straight chat model, and a third relies on a chain that depends on a specific response structure. When you swap providers or endpoints, you end up rewriting prompts, remapping fields, and debugging tiny differences that are genuinely hard to spot. It’s not the “one change,” it’s the ripple effect: people stop experimenting because it’s risky, slow, and distracting.

The friction compounds. Here’s where it breaks down in real teams.

  • Every model test turns into workflow edits, so experiments pile up and rarely ship.
  • Prompts get duplicated across nodes, which means consistency drops over time.
  • Small response-format differences create silent failures, like missing fields or empty replies.
  • You lose a “known good” baseline because the original setup gets overwritten instead of isolated.

The Fix: A Custom OpenAI-Compatible Endpoint for GitHub Models

This n8n workflow acts like a translator between your existing OpenAI-style LLM nodes and GitHub Models. You configure a new OpenAI credential in n8n, but instead of pointing it at OpenAI, you set the Base URL to an n8n webhook from this template. From that point on, your LLM node “thinks” it’s talking to an OpenAI-compatible API. Behind the scenes, the workflow receives two types of requests: one for listing models and another for chat completions. It forwards those requests to GitHub Models using HTTP calls, then reshapes the responses so your nodes keep working without prompt refactors.

The workflow starts when your LLM node asks for available models or sends a chat completion request. n8n calls GitHub Models, normalizes the output, and responds back to the LLM node in the format it already expects. If streaming is involved, the workflow checks for it and returns the right kind of reply.
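The routing described above can be sketched in a few lines. This is an illustrative Python sketch of the compatibility layer’s logic, not the n8n implementation itself; the function name, the tiny static catalog, and the path handling are assumptions for illustration.

```python
# Sketch of the compatibility layer: route OpenAI-style requests to the
# matching GitHub Models behavior. Names and shapes here are illustrative.

def handle_request(path, body=None):
    """Route an OpenAI-style request to the matching GitHub Models call."""
    if path.endswith("/models"):
        # In the workflow, an HTTP Request node fetches GitHub's model
        # catalog; a tiny static catalog stands in for it here.
        catalog = [{"id": "openai/gpt-4o-mini"}, {"id": "meta/llama-3.1-8b"}]
        return {
            "object": "list",
            "data": [{"id": m["id"], "object": "model"} for m in catalog],
        }
    if path.endswith("/chat/completions"):
        # Forward only the fields GitHub Models needs, preserving the
        # OpenAI-style request shape the LLM node already sends.
        return {
            "model": body["model"],
            "messages": body["messages"],
            "stream": body.get("stream", False),
        }
    raise ValueError(f"unsupported path: {path}")
```

Because both branches return OpenAI-shaped responses, the LLM node never notices that a different provider answered.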


Real-World Impact

Say you have 6 AI-powered workflows in n8n (content briefs, support replies, internal QA, and so on). Manually swapping models often means opening each workflow, updating credentials, checking the model name, and running a quick test, maybe 10 minutes each. That’s about an hour every time you want to experiment. With this setup, you change the Base URL once and pick a different GitHub model, then your existing nodes keep sending the same OpenAI-style requests. For many teams, testing drops to about 10 minutes total.

Requirements

  • n8n instance (try n8n Cloud free)
  • Self-hosting option if you prefer (Hostinger works well)
  • GitHub account for access to GitHub Models
  • HTTP Request access in n8n to call the GitHub Models API
  • GitHub credential/token (generate one under Developer settings in GitHub)

Skill level: Intermediate. You’ll paste credentials, set a Base URL, and verify two webhooks respond correctly.

Need help implementing this? Talk to an automation expert (free 15-minute consultation).

The Workflow Flow

A chat message kicks things off. The Chat Message Trigger starts the flow when a user asks something, then passes that message into an LLM chain that uses an OpenAI-style chat model connection.

The LLM node calls your n8n webhook instead of OpenAI. You create a custom OpenAI credential where the Base URL points to this workflow’s webhook endpoints. From the LLM node’s perspective, it’s business as usual.

Two webhooks handle the “API surface.” One webhook responds to model discovery (“models”) by calling GitHub’s model catalog endpoint, combining the items, and returning a compatible model list. The other webhook handles chat completions by sending the prompt payload to GitHub’s chat completion endpoint via HTTP Request.

The workflow returns a response in the exact shape your nodes expect. An If check (Agent Stream Check) decides whether to respond with a streamed reply or a standard webhook response, so your downstream logic doesn’t get surprised.

You can easily modify which GitHub model you default to and how you map the response fields based on your needs. See the full implementation guide below for customization options.

Step-by-Step Implementation Guide

Step 1: Configure the Webhook Trigger

This workflow exposes two webhook endpoints for model listing and chat completions, plus a chat trigger for LLM chain processing.

  1. Open Model List Webhook and set Path to github-models/models.
  2. In Model List Webhook, set Response Mode to responseNode so Return Model List can respond.
  3. Open Chat Completion Webhook and set Path to github-models/chat/completions and HTTP Method to POST.
  4. In Chat Completion Webhook, set Response Mode to responseNode so responses flow to Streamed Agent Reply or Standard Chat Reply.
  5. Keep Chat Message Trigger enabled to initiate the chain through LLM Chain Processor.
Use the webhook test URLs in each webhook node to validate connectivity before integrating external clients.
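The two paths above line up with the routes an OpenAI-compatible client appends to its Base URL. A minimal sketch, assuming a placeholder n8n host:

```python
# How an OpenAI-style client's requests land on the two webhook paths.
# The host below is a placeholder; substitute your own n8n instance URL.
base_url = "https://your-n8n.example.com/webhook/github-models"

# An OpenAI-compatible client appends these routes to the Base URL:
model_list_url = f"{base_url}/models"            # -> Model List Webhook
chat_url = f"{base_url}/chat/completions"        # -> Chat Completion Webhook

print(model_list_url)
print(chat_url)
```

This is why the webhook paths must be exactly github-models/models and github-models/chat/completions: the client builds them for you from the Base URL.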

Step 2: Connect GitHub API and Configure HTTP Requests

These nodes call GitHub’s models catalog and inference endpoints.

  1. Open Model Catalog Request and confirm URL is https://models.github.ai/catalog/models.
  2. In Model Catalog Request, keep Authentication set to predefinedCredentialType and Node Credential Type to githubApi.
  3. Credential Required: Connect your githubApi credentials in Model Catalog Request.
  4. Open Chat Completion API Call and set URL to https://models.github.ai/inference/chat/completions and Method to POST.
  5. Set JSON Body in Chat Completion API Call to ={{ { model: $json.body.model, messages: $json.body.messages, stream: $json.body.stream } }}.
  6. Credential Required: Connect your githubApi credentials in Chat Completion API Call.
⚠️ Common Pitfall: If the GitHub API credentials lack access to the models endpoints, both catalog and chat completion requests will fail with authorization errors.
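The JSON Body expression in step 5 can be read as a simple pass-through. Here is a Python equivalent for reference; the function name is illustrative, and the input mirrors the `$json.body` shape the webhook receives:

```python
# Python equivalent of the JSON Body expression in Chat Completion API Call:
# ={{ { model: $json.body.model, messages: $json.body.messages, stream: $json.body.stream } }}

def build_github_payload(webhook_json):
    """Pass through only the fields GitHub's inference endpoint needs."""
    body = webhook_json["body"]
    return {
        "model": body["model"],
        "messages": body["messages"],
        # May be absent, mirroring how $json.body.stream can be undefined.
        "stream": body.get("stream"),
    }
```

Anything else the client sends is dropped, which keeps the forwarded request minimal and predictable.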

Step 3: Set Up the LLM Chain Processor

The chain uses the OpenAI-compatible model configured in Webhook LLM Connector.

  1. Open Webhook LLM Connector and set the Model to openai/gpt-4o-mini.
  2. Credential Required: Connect your openAiApi credentials in Webhook LLM Connector.
  3. Ensure Webhook LLM Connector is linked as the language model for LLM Chain Processor via the AI connection.
The language model credentials are added to Webhook LLM Connector, not to LLM Chain Processor.

Step 4: Configure Aggregation, Routing, and Webhook Responses

This step controls how model lists are returned and how chat completion responses are streamed or returned as JSON.

  1. In Combine Model Items, set Aggregate to aggregateAllItemData to collect all catalog items.
  2. In Return Model List, set Respond With to json and Response Body to ={{ ({ "object": "list", "data": $json.data.map(item => ({ "id": item.id, "object": "model", "created": 1733945430, "owned_by": "system" })) }) }}.
  3. Open Agent Stream Check and ensure the condition checks ={{ $('Chat Completion Webhook').first().json.body.stream }} for a true boolean.
  4. Set Streamed Agent Reply to Respond With text and Response Body to ={{ $json.data }} for streaming outputs.
  5. Set Standard Chat Reply to Respond With json and Response Body to ={{ $json }} for non-streamed responses.
⚠️ Common Pitfall: If the request body omits stream, Agent Stream Check routes to Standard Chat Reply by default, which may not match streaming clients.
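The two expressions in this step translate directly into plain logic. A Python sketch for reference (function names are illustrative; behavior mirrors the expressions and the strict-boolean check above):

```python
# Python equivalents of the two Step 4 expressions, for reference only.

def to_openai_model_list(aggregated):
    """Mirror of the Return Model List response body expression."""
    return {
        "object": "list",
        "data": [
            {"id": item["id"], "object": "model",
             "created": 1733945430, "owned_by": "system"}
            for item in aggregated["data"]
        ],
    }

def pick_reply_branch(request_body):
    """Mirror of Agent Stream Check: stream must be boolean true to stream."""
    if request_body.get("stream") is True:
        return "Streamed Agent Reply"
    return "Standard Chat Reply"
```

Note that a string value like "true" fails the strict boolean check, which is the same behavior the pitfall above describes for missing or malformed stream flags.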

Step 5: Test and Activate Your Workflow

Validate each path using the webhook test URLs and then enable the workflow.

  1. Click Execute Workflow and send a test request to Model List Webhook; confirm Return Model List responds with a model list object.
  2. Send a POST request to Chat Completion Webhook with a JSON body that includes model, messages, and stream; verify routing through Chat Completion API Call and Agent Stream Check.
  3. If stream is true, confirm Streamed Agent Reply returns text; if false, confirm Standard Chat Reply returns JSON.
  4. Once validated, toggle the workflow to Active for production use.

Watch Out For

  • GitHub credentials can expire or need specific permissions. If things break, check your GitHub token scopes and the credential test in n8n first.
  • If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
  • Default prompts in AI nodes are generic. Add your brand voice early or you’ll be editing outputs forever.

Common Questions

How quickly can I implement this GitHub Models integration automation?

About 30 minutes if your GitHub credentials are ready.

Can non-technical teams implement this GitHub Models integration?

Yes, but someone needs to be comfortable pasting API credentials and testing a webhook response. No coding, just careful configuration.

Is n8n free to use for this GitHub Models integration workflow?

Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in GitHub Models limits (GitHub notes the free model APIs aren’t intended for production use).

Where can I host n8n to run this automation?

Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.

How do I adapt this GitHub Models integration solution to my specific challenges?

You can keep the same OpenAI-style credential and swap what happens behind the webhook. Common tweaks include changing how the “Model Catalog Request” maps model IDs, adjusting the “Chat Completion API Call” payload fields, and altering the streaming behavior by editing the “Agent Stream Check” logic to match your preferred response format.

Why is my GitHub connection failing in this workflow?

Usually it’s an expired token or missing permissions on the GitHub credential used by the HTTP Request nodes. Double-check the credential attached to “Model Catalog Request” and “Chat Completion API Call,” then re-test from the webhook URL to confirm you’re getting a real response. If you’re sending lots of requests, you might also be hitting GitHub’s intended-for-prototyping limits, which can look like intermittent failures.

What’s the capacity of this GitHub Models integration solution?

It depends more on your hosting and GitHub’s limits than the workflow itself. On n8n Cloud, higher tiers handle more monthly executions, and self-hosting has no execution cap (your server becomes the limit). Practically, this pattern is fine for steady internal usage, but GitHub explicitly positions these model APIs for prototyping rather than production-scale traffic. If you need high volume, keep the same pattern but point your Base URL to a paid provider with stronger rate limits.

Is this GitHub Models integration automation better than using Zapier or Make?

Often, yes. You’re essentially building an OpenAI-compatible “shim” with multiple webhooks, conditional streaming behavior, and response remapping, and n8n handles that kind of logic cleanly in one place. Zapier and Make can do webhooks too, but this style of request/response proxying gets awkward fast, and costs tend to climb when you add branching and higher task volume. If you only need a simple “send prompt, get response” call, those tools are fine. If you want to swap models without rewiring multiple workflows, n8n is a more natural fit. Talk to an automation expert if you’re not sure which fits.

Once this is in place, model testing stops being a rewrite project. You keep your workflows steady, and you get your experimentation speed back.

Need Help Setting This Up?

Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.

Lisa Granqvist

Workflow Automation Expert

Expert in workflow automation and no-code tools.
