January 22, 2026

OpenAI to Google Sheets, a live AI cost dashboard

Lisa Granqvist Partner Workflow Automation Expert

Your AI costs don’t usually “blow up” in one dramatic moment. They creep. A few extra test runs, a new model, one noisy prompt loop, and suddenly you are digging through execution logs trying to justify a bigger bill.

Marketing leads feel it when campaign experiments multiply. A product owner gets the “why did spend jump?” question on Monday. And agencies juggling multiple client automations get hit hardest. This OpenAI cost dashboard automation puts messages, tokens, and spend into one Google Sheet so you can answer budget questions fast.

You’ll see what the workflow tracks, how the dashboard gets generated, and how to adapt it to your own AI agent or RAG setup without turning it into a week-long project.

How This Automation Works

The full n8n workflow, from trigger to final output:

n8n Workflow Template: OpenAI to Google Sheets, a live AI cost dashboard

The Problem: AI usage is hard to explain (until it’s too late)

If you’re using OpenAI inside n8n for chatbots, lead qualification, content drafts, support macros, or internal tools, you already have “the data.” It’s just scattered. Some of it lives in execution history, some is buried in a few screenshots, and some is guesswork. Then a stakeholder asks a simple question like, “Which workflow is costing us the most?” and it turns into a messy hour of hunting session IDs, comparing token counts, and trying to reconstruct what happened across runs. Honestly, it’s not just annoying. It makes you slower and more cautious with experimentation.

The friction compounds. Here’s where it breaks down.

  • Execution logs tell you what ran, but they don’t give you a clean, daily view of tokens and cost across sessions.
  • When costs spike, it’s easy to miss the real cause because prompts, models, and outputs aren’t tracked together in one place.
  • Teams end up arguing from opinions (“it must be the chatbot”) instead of looking at a single source of truth.
  • Without consistent tracking, forecasting next month’s spend turns into guesswork and conservative limits.

The Solution: A live dashboard that logs tokens and cost to Sheets

This n8n template acts like a measurement layer for your AI workflows. It captures conversation details from your AI agent runs (session ID, input, output, prompt tokens, completion tokens, total tokens, model name) and stores them as structured rows. Then it enriches those rows with model pricing so you can compute real spend per message and per session. Finally, it generates an interactive dashboard view, so you can see totals, daily charts, and prompt-versus-completion token usage without manually building reports. You can plug it into almost any AI Agent or RAG workflow in n8n, which means you stop treating cost visibility as a separate project.

The workflow starts with an incoming webhook and a chat trigger for your AI agent. From there, it logs messages into a table, polls for any rows missing usage labels, pulls execution details, calculates global cost using your pricing table, and updates the records. The dashboard HTML is generated from the stored data and returned via webhook so it can be viewed in a browser or embedded.

What You Get: Automation vs. Results

Example: What This Looks Like

Say your team runs an AI agent for lead triage and support drafts, and it averages about 200 messages per week. Manually checking usage usually means opening executions, copying token counts, and trying to total it up later (even at 2 minutes per run, that’s close to 7 hours a week). With this workflow, logging happens automatically on each message, and the scheduled poll fills in missing cost data in the background. Your “work” becomes opening a Google Sheet or dashboard link, which is more like 2 minutes a day.

What You’ll Need

  • n8n instance (try n8n Cloud free)
  • Self-hosting option if you prefer (Hostinger works well)
  • OpenAI for chat model usage data
  • Google Sheets to store rows and build views
  • OpenAI API key (get it from the OpenAI API dashboard)

Skill level: Intermediate. You’ll connect credentials, create two tables/sheets, and paste a webhook URL into the right place.

Don’t want to set this up yourself? Talk to an automation expert (free 15-minute consultation).

How It Works

A webhook or chat trigger kicks things off. When your AI agent receives a message (or when an external system calls the webhook), the workflow starts capturing what matters: session ID, the user’s input, and the AI output.

Key fields get normalized immediately. A Set step assigns a consistent date field and prepares a run identifier, so later reporting doesn’t depend on messy timestamps or half-filled metadata.

Usage details are enriched after the fact. On a schedule, the workflow looks for rows that still need token or cost fields, pulls the execution details, then assembles prompt tokens, completion tokens, and total tokens.

Costs are computed using your pricing table. It fetches your “Model price” data, merges it with the token metrics, calculates global cost, and updates the stored records. Then the dashboard HTML is generated from the stored rows and returned as the webhook response.

You can easily modify the model list and pricing logic to match your own providers or internal chargeback rules. See the full implementation guide below for customization options.
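The cost step itself is simple arithmetic. Here is a minimal sketch of what a Compute Global Cost node might do, assuming prices are stored as USD per 1M tokens and using the field names promptTokens, completionTokens, promptTokensPrice, and completionTokensPrice (match these to your own pricing table's columns):

```javascript
// Compute spend for one logged message from token counts and a pricing row.
// Prices are assumed to be USD per 1M tokens, as on OpenAI's pricing page.
function computeGlobalCost(row, price) {
  const promptCost = (row.promptTokens / 1_000_000) * price.promptTokensPrice;
  const completionCost =
    (row.completionTokens / 1_000_000) * price.completionTokensPrice;
  return Number((promptCost + completionCost).toFixed(6));
}

// Example: 1,200 prompt tokens + 300 completion tokens on a model priced
// at $0.40 / $1.60 per 1M tokens (hypothetical rates).
const cost = computeGlobalCost(
  { promptTokens: 1200, completionTokens: 300 },
  { promptTokensPrice: 0.4, completionTokensPrice: 1.6 }
);
console.log(cost); // 0.00096
```

Rounding to six decimals keeps fractions of a cent visible per message while staying readable once you sum thousands of rows.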

Step-by-Step Implementation Guide

Step 1: Configure the Webhook Trigger

This workflow serves a dashboard over HTTP, so the webhook path and response mode must be set correctly.

  1. Open Incoming Webhook Endpoint and set Path to 176f23d4-71b3-41e0-9364-43bea6be01d3.
  2. Set Response Mode to responseNode so the workflow returns the HTML from Return Webhook Output.
  3. Confirm the connection from Incoming Webhook Endpoint to Assign Date Field.

Use the production URL from Incoming Webhook Endpoint in your browser to view the live dashboard once the workflow is active.
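Once the workflow is active, the dashboard is a plain HTTP GET. A quick command-line check, where the host is a placeholder you must replace with your own n8n instance URL (the path is the one from step 1):

```shell
# HOST is a placeholder — substitute your real n8n instance before running.
HOST="https://your-n8n-host.example.com"
WEBHOOK_PATH="176f23d4-71b3-41e0-9364-43bea6be01d3"
URL="$HOST/webhook/$WEBHOOK_PATH"

# -s: quiet, -o: save the returned HTML so you can open it in a browser.
curl -s --connect-timeout 5 -o dashboard.html "$URL" \
  || echo "Update HOST to your real n8n URL first."
```

If you get an empty response, double-check that the workflow is toggled Active and that you are using the production URL, not the test URL.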

Step 2: Configure the Chat Trigger and AI Agent

This path handles incoming chat messages, generates an AI response, and logs the conversation.

  1. Open Incoming Chat Trigger and enable Public as true.
  2. In Conversational AI Agent, set Text to =Réponds à ce message : {{ $json.chatInput }} (French for “Reply to this message: …”) and keep Prompt Type as define. Swap in an English prompt if your users don’t expect French.
  3. Open OpenAI Chat Engine and set Model to gpt-4.1-mini.
  4. Credential Required: Connect your openAiApi credentials in OpenAI Chat Engine.
  5. Ensure Session Memory Buffer is connected as the memory for Conversational AI Agent; add credentials to the parent OpenAI Chat Engine if required by your setup.
  6. Confirm the parallel split: Conversational AI Agent outputs to both No-Op Gate and Capture Run Identifier in parallel.

⚠️ Common Pitfall: The Session Memory Buffer is an AI sub-node; do not add credentials to it directly—configure credentials on OpenAI Chat Engine.

Step 3: Connect Data Tables for Chat Logs and Dashboard Data

This workflow uses multiple data table nodes (6 total). Configure them by function to keep message data, pricing, and usage stats aligned.

  1. In Retrieve Message Rows, set Operation to get, Return All to true, and Data Table to Template - data (GyHAqQLTtmZbynYI).
  2. In Append Chat Log Row, map columns to expressions like ={{ $('Incoming Chat Trigger').item.json.chatInput }} and ={{ $json.id }} for executionId.
  3. In Fetch Unlabeled Rows, set filter modelName to isEmpty, and keep Return All as true.
  4. In Fetch Price Table, set Data Table to Model - Price (5tsC5vulvGwYGS2g).
  5. In Update Usage Records, set Operation to update and filter by executionId using ={{ $json.executionId }}.
  6. In Insert Pricing Row, map columns to ={{ $json.name }}, ={{ $json.promptTokensPrice }}, and ={{ $json.completionTokensPrice }}.

All data table nodes reference internal n8n data tables—no external credentials are required.

Step 4: Build the Dashboard Response

This path assembles the KPI dashboard HTML and returns it as the webhook response.

  1. In Assign Date Field, add the assignment today with value ={{ $today }}.
  2. Keep the flow: Incoming Webhook Endpoint → Assign Date Field → Retrieve Message Rows → Generate Dashboard HTML.
  3. In Generate Dashboard HTML, keep the provided JavaScript Code intact (it outputs an HTML dashboard as binary data).
  4. In Return Webhook Output, set Respond With to binary.
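If you later customize Generate Dashboard HTML, the pattern to preserve is: build an HTML string from the stored rows, then base64-encode it so it can travel as n8n binary data. A stripped-down sketch, assuming rows carry date and globalCost fields (the template's real Code node renders charts and KPIs on top of this idea):

```javascript
// Build a minimal HTML dashboard from logged rows and encode it as base64,
// the form n8n's Code node uses for binary data.
function buildDashboard(rows) {
  const total = rows.reduce((sum, r) => sum + (r.globalCost || 0), 0);
  const tableRows = rows
    .map(r => `<tr><td>${r.date}</td><td>$${r.globalCost.toFixed(4)}</td></tr>`)
    .join('');
  const html =
    `<html><body><h1>AI Spend: $${total.toFixed(4)}</h1>` +
    `<table>${tableRows}</table></body></html>`;
  return Buffer.from(html).toString('base64');
}

const base64Html = buildDashboard([
  { date: '2026-01-20', globalCost: 0.0012 },
  { date: '2026-01-21', globalCost: 0.0009 },
]);

// In an n8n Code node you would then return this under `binary`, e.g.:
// return [{ json: {}, binary: { data: {
//   data: base64Html, mimeType: 'text/html', fileName: 'dashboard.html' } } }];
```

Keeping the output as binary is what lets Return Webhook Output serve it with Respond With set to binary, so the browser renders a page instead of downloading JSON.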

Step 5: Configure Scheduled Usage Enrichment

This scheduled path enriches chat rows with token usage and cost data.

  1. Open Scheduled Poll Trigger and set the interval to every 30 minutes.
  2. Ensure the flow is Scheduled Poll Trigger → Fetch Unlabeled Rows → Iterate Row Batches.
  3. Iterate Row Batches outputs to both No-Op Placeholder and Retrieve Execution Details for batch processing and debugging.
  4. In Retrieve Execution Details, set Execution ID to ={{ $json.executionId }} and connect credentials.
  5. Credential Required: Connect your n8nApi credentials in Retrieve Execution Details.
  6. In Assemble Token Metrics, confirm the expressions for token counts like ={{ $json.data.resultData.runData['OpenAI Chat Engine'][0].data.ai_languageModel[0][0].json.tokenUsage.totalTokens }}.
  7. Assemble Token Metrics outputs to both Fetch Price Table and Combine Model Data in parallel.
  8. In Combine Model Data, set Mode to combine and merge by matching name to model_name.
  9. In Compute Global Cost, keep the JavaScript that calculates globalCost, then update records via Update Usage Records.

⚠️ Common Pitfall: If the n8n API token lacks execution read permissions, Retrieve Execution Details will fail and downstream cost calculations will be empty.
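The long expression in Assemble Token Metrics is another fragile spot: on runs that errored before the model executed, the nested path doesn't exist and the expression throws. A defensive version of the same lookup, sketched with optional chaining (the node name OpenAI Chat Engine matches this template; adjust it if you rename the node):

```javascript
// Safely pull token usage out of an n8n execution payload.
// Returns zeros instead of throwing when the path is missing.
function extractTokenUsage(execution) {
  const usage =
    execution?.data?.resultData?.runData?.['OpenAI Chat Engine']?.[0]?.data
      ?.ai_languageModel?.[0]?.[0]?.json?.tokenUsage;
  return {
    promptTokens: usage?.promptTokens ?? 0,
    completionTokens: usage?.completionTokens ?? 0,
    totalTokens: usage?.totalTokens ?? 0,
  };
}

// Works on a normal run...
const ok = extractTokenUsage({
  data: { resultData: { runData: { 'OpenAI Chat Engine': [
    { data: { ai_languageModel: [[{ json: { tokenUsage:
      { promptTokens: 1200, completionTokens: 300, totalTokens: 1500 } } }]] } },
  ] } } },
});

// ...and on a failed run with no model output.
const failed = extractTokenUsage({ data: { resultData: { runData: {} } } });
```

Zero-filled rows are easy to spot and re-process later, whereas a thrown expression error stops the whole scheduled batch.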

Step 6: Seed or Update the Pricing Table

This optional path is used to insert model pricing rows when needed.

  1. Review the pinned data on Prepare Price Fields to confirm your model prices (e.g., gpt-4.1-mini with prompt and completion rates).
  2. Trigger Insert Pricing Row to insert model pricing into Model - Price (5tsC5vulvGwYGS2g).
  3. Keep Prepare Price Fields as a placeholder for future transformations if you need to normalize pricing data.

The No-Op Gate and No-Op Placeholder nodes exist for debugging and can be used to pause or inspect data during setup.

Step 7: Test and Activate Your Workflow

Validate the webhook dashboard and chat logging before turning the workflow on.

  1. Click Execute Workflow and send a test message to Incoming Chat Trigger to verify Conversational AI Agent responses and Append Chat Log Row inserts.
  2. Call the Incoming Webhook Endpoint URL in a browser and confirm a full HTML dashboard is returned by Return Webhook Output.
  3. Wait for the next Scheduled Poll Trigger run and verify that Update Usage Records populates token and cost fields.
  4. Once tests succeed, toggle the workflow to Active for production use.

Common Gotchas

  • Google Sheets credentials can expire or need specific permissions. If things break, check the n8n Credentials screen and the target Sheet sharing settings first.
  • If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
  • Default prompts in AI nodes are generic. Add your brand voice early or you’ll be editing outputs forever.

Frequently Asked Questions

How long does it take to set up this OpenAI cost dashboard automation?

About 30 minutes if your Sheets and API keys are ready.

Do I need coding skills to automate OpenAI cost dashboard reporting?

No. You’ll mostly connect accounts and map a few fields in Set and Data Table nodes.

Is n8n free to use for this OpenAI cost dashboard workflow?

Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in OpenAI API costs, which depend on the model and token volume.

Where can I host n8n to run this automation?

Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.

Can I customize this OpenAI cost dashboard workflow for per-client tracking?

Yes, and it’s a common tweak. Add a clientId (or project) field where the workflow appends the chat log row, then carry it through the “Assemble Token Metrics” and update steps so your Sheet can pivot by client. Many teams also customize the pricing table to apply different rates per client or environment (prod vs. staging).
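If you add that clientId field, aggregating spend per client is a few lines wherever you generate the dashboard. A sketch, assuming each row carries clientId and globalCost (both names are illustrative, not from the template):

```javascript
// Sum globalCost per clientId so the dashboard can show per-client spend.
function costByClient(rows) {
  const totals = {};
  for (const row of rows) {
    const client = row.clientId ?? 'unassigned';
    totals[client] = (totals[client] ?? 0) + (row.globalCost ?? 0);
  }
  return totals;
}

const totals = costByClient([
  { clientId: 'acme', globalCost: 0.002 },
  { clientId: 'acme', globalCost: 0.001 },
  { clientId: 'globex', globalCost: 0.004 },
  { globalCost: 0.0005 }, // row logged before clientId existed
]);
console.log(totals);
```

Bucketing rows without a clientId under "unassigned" keeps older data visible instead of silently dropping it from the pivot.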

Why is my OpenAI connection failing in this workflow?

Most of the time it’s an API key issue or the wrong org/project selected in your OpenAI credential. Regenerate the key, update it in n8n, and rerun a single test execution to confirm tokens are being returned. If it fails only under load, you may be hitting rate limits, so slow the schedule or batch size a bit.

How many messages can this OpenAI cost dashboard automation handle?

A lot. On n8n Cloud Starter you’re limited by monthly executions, while self-hosting has no execution cap (it mostly depends on your server size and how often you poll for updates). Practically, teams logging a few thousand messages a week are fine as long as you batch updates and don’t fetch execution details one-by-one all day.

Is this OpenAI cost dashboard automation better than using Zapier or Make?

Often, yes. This workflow relies on scheduled polling, pulling execution details, and updating stored rows after enrichment, which is where Zapier/Make scenarios can get expensive or awkward. n8n also gives you self-hosting for unlimited executions and more flexible branching when you want to treat “missing pricing” differently from “missing tokens.” If you only need a simple “log one response to a sheet” flow, Zapier or Make will feel quicker. For a real dashboard pipeline, n8n is usually the better fit. Talk to an automation expert if you want a second opinion.

Once this is running, cost visibility stops being a fire drill. The workflow tracks the repetitive stuff, and you get a clean view you can actually use.

Need Help Setting This Up?

Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.

Lisa Granqvist

Workflow Automation Expert

Expert in workflow automation and no-code tools.
