OpenAI to Google Sheets, a live AI cost dashboard
Your AI costs don’t usually “blow up” in one dramatic moment. They creep. A few extra test runs, a new model, one noisy prompt loop, and suddenly you are digging through execution logs trying to justify a bigger bill.
Marketing leads feel it when campaign experiments multiply. A product owner gets the “why did spend jump?” question on Monday. And agencies juggling multiple client automations get hit hardest. This OpenAI cost dashboard automation puts messages, tokens, and spend into one Google Sheet so you can answer budget questions fast.
You’ll see what the workflow tracks, how the dashboard gets generated, and how to adapt it to your own AI agent or RAG setup without turning it into a week-long project.
How This Automation Works
The full n8n workflow, from trigger to final output:
n8n Workflow Template: OpenAI to Google Sheets, a live AI cost dashboard
flowchart LR
subgraph sg0["Schedule Flow"]
direction LR
n5@{ icon: "mdi:swap-vertical", form: "rounded", label: "Model/Token Info", pos: "b", h: 48 }
n7["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/n8n.svg' width='40' height='40' /></div><br/>Get an execution"]
n8@{ icon: "mdi:play-circle", form: "rounded", label: "Schedule Trigger", pos: "b", h: 48 }
n9@{ icon: "mdi:cog", form: "rounded", label: "Get row(s)", pos: "b", h: 48 }
n10@{ icon: "mdi:cog", form: "rounded", label: "Update row(s)", pos: "b", h: 48 }
n15@{ icon: "mdi:swap-vertical", form: "rounded", label: "Loop Over Items", pos: "b", h: 48 }
n16@{ icon: "mdi:cog", form: "rounded", label: "No Operation, do nothing", pos: "b", h: 48 }
n18@{ icon: "mdi:cog", form: "rounded", label: "Get row(s)3", pos: "b", h: 48 }
n19["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/merge.svg' width='40' height='40' /></div><br/>Merge1"]
n20["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Code in JavaScript1"]
n19 --> n20
n9 --> n15
n18 --> n19
n10 --> n15
n15 --> n16
n15 --> n7
n7 --> n5
n5 --> n18
n5 --> n19
n8 --> n9
n20 --> n10
end
subgraph sg1["When chat message received Flow"]
direction LR
n0@{ icon: "mdi:play-circle", form: "rounded", label: "When chat message received", pos: "b", h: 48 }
n1@{ icon: "mdi:robot", form: "rounded", label: "AI Agent", pos: "b", h: 48 }
n2@{ icon: "mdi:memory", form: "rounded", label: "Simple Memory", pos: "b", h: 48 }
n3@{ icon: "mdi:brain", form: "rounded", label: "OpenAI Chat Model", pos: "b", h: 48 }
n4@{ icon: "mdi:swap-vertical", form: "rounded", label: "Get Execution ID", pos: "b", h: 48 }
n6@{ icon: "mdi:cog", form: "rounded", label: "Insert row2", pos: "b", h: 48 }
n21@{ icon: "mdi:cog", form: "rounded", label: "No Operation, do nothing1", pos: "b", h: 48 }
n1 --> n21
n1 --> n4
n2 -.-> n1
n4 --> n6
n3 -.-> n1
n0 --> n1
end
subgraph sg2["Flow 3"]
direction LR
n11@{ icon: "mdi:swap-vertical", form: "rounded", label: "Edit Fields", pos: "b", h: 48 }
n13@{ icon: "mdi:cog", form: "rounded", label: "Get row(s)1", pos: "b", h: 48 }
n17["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/webhook.dark.svg' width='40' height='40' /></div><br/>Webhook"]
n22["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Code in JavaScript"]
n23["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/webhook.dark.svg' width='40' height='40' /></div><br/>Respond to Webhook"]
n17 --> n11
n11 --> n13
n13 --> n22
n22 --> n23
end
subgraph sg3["Flow 4"]
direction LR
n12@{ icon: "mdi:cog", form: "rounded", label: "Insert row1", pos: "b", h: 48 }
n14@{ icon: "mdi:swap-vertical", form: "rounded", label: "Edit Fields1", pos: "b", h: 48 }
n12 --> n14
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n8,n0 trigger
class n1 ai
class n3 aiModel
class n2 ai
class n17,n23 api
class n20,n22 code
classDef customIcon fill:none,stroke:none
class n7,n19,n20,n17,n22,n23 customIcon
The Problem: AI usage is hard to explain (until it’s too late)
If you’re using OpenAI inside n8n for chatbots, lead qualification, content drafts, support macros, or internal tools, you already have “the data.” It’s just scattered. Some of it lives in execution history, some is buried in a few screenshots, and some is guesswork. Then a stakeholder asks a simple question like, “Which workflow is costing us the most?” and it turns into a messy hour of hunting session IDs, comparing token counts, and trying to reconstruct what happened across runs. Honestly, it’s not just annoying. It makes you slower and more cautious with experimentation.
The friction compounds. Here’s where it breaks down.
- Execution logs tell you what ran, but they don’t give you a clean, daily view of tokens and cost across sessions.
- When costs spike, it’s easy to miss the real cause because prompts, models, and outputs aren’t tracked together in one place.
- Teams end up arguing from opinions (“it must be the chatbot”) instead of looking at a single source of truth.
- Without consistent tracking, forecasting next month’s spend turns into guesswork and conservative limits.
The Solution: A live dashboard that logs tokens and cost to Sheets
This n8n template acts like a measurement layer for your AI workflows. It captures conversation details from your AI agent runs (session ID, input, output, prompt tokens, completion tokens, total tokens, model name) and stores them as structured rows. Then it enriches those rows with model pricing so you can compute real spend per message and per session. Finally, it generates an interactive dashboard view, so you can see totals, daily charts, and prompt-versus-completion token usage without manually building reports. You can plug it into almost any AI Agent or RAG workflow in n8n, which means you stop treating cost visibility as a separate project.
The workflow starts with an incoming webhook and a chat trigger for your AI agent. From there, it logs messages into a table, polls for any rows missing usage labels, pulls execution details, calculates global cost using your pricing table, and updates the records. The dashboard HTML is generated from the stored data and returned via webhook so it can be viewed in a browser or embedded.
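In plain JavaScript, the scheduled enrichment pass described above boils down to something like the sketch below. The field names match the setup steps later in this guide; `lookupUsage` is a hypothetical stand-in for the execution-fetch and token-extraction nodes, not part of the template itself.

```javascript
// Rows that already have a modelName are considered enriched and skipped;
// everything else gets usage looked up by executionId.
function enrichRows(rows, lookupUsage) {
  return rows.map((row) => {
    if (row.modelName) return row; // already enriched
    const usage = lookupUsage(row.executionId);
    return usage ? { ...row, ...usage } : row; // leave untouched if not found yet
  });
}

// Hypothetical lookup standing in for the n8n "Get an execution" call.
const fakeUsage = (id) =>
  id === '42' ? { modelName: 'gpt-4.1-mini', totalTokens: 900 } : null;

const enriched = enrichRows(
  [
    { executionId: '42', modelName: '' },
    { executionId: '7', modelName: '' },
  ],
  fakeUsage
);
console.log(enriched);
```

Leaving not-yet-found rows untouched matters: executions can still be running when the poll fires, and the next scheduled run will pick them up.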
What You Get: Automation vs. Results
| What This Workflow Automates | Results You’ll Get |
|---|---|
| Logs every AI agent message with session ID, input, output, and model name | One structured record of what your AI workflows actually did |
| Pulls token counts from execution details on a schedule | Prompt, completion, and total token usage per message and per session |
| Merges usage with your model pricing table and computes cost per row | Real spend numbers you can answer budget questions with |
| Generates an HTML dashboard served via webhook | Totals and daily charts without manually building reports |
Example: What This Looks Like
Say your team runs an AI agent for lead triage and support drafts, and it averages about 200 messages per week. Manually checking usage usually means opening executions, copying token counts, and trying to total it up later (even at 2 minutes per run, that’s close to 7 hours a week). With this workflow, logging happens automatically on each message, and the scheduled poll fills in missing cost data in the background. Your “work” becomes opening a Google Sheet or dashboard link, which is more like 2 minutes a day.
What You’ll Need
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- OpenAI for chat model usage data.
- Google Sheets to store rows and build views.
- OpenAI API key (get it from the OpenAI API dashboard).
Skill level: Intermediate. You’ll connect credentials, create two tables/sheets, and paste a webhook URL into the right place.
Don’t want to set this up yourself? Talk to an automation expert (free 15-minute consultation).
How It Works
A webhook or chat trigger kicks things off. When your AI agent receives a message (or when an external system calls the webhook), the workflow starts capturing what matters: session ID, the user’s input, and the AI output.
Key fields get normalized immediately. A Set step assigns a consistent date field and prepares a run identifier, so later reporting doesn’t depend on messy timestamps or half-filled metadata.
Usage details are enriched after the fact. On a schedule, the workflow looks for rows that still need token or cost fields, pulls the execution details, then assembles prompt tokens, completion tokens, and total tokens.
Costs are computed using your pricing table. It fetches your “Model price” data, merges it with the token metrics, calculates global cost, and updates the stored records. Then the dashboard HTML is generated from the stored rows and returned as the webhook response.
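As a rough sketch of that last step, here is the kind of calculation a cost Code node performs once token metrics and pricing are merged. The prices shown and the per-1M-tokens unit are illustrative assumptions; use whatever your “Model price” table actually stores.

```javascript
// Illustrative prices per 1M tokens; replace with rows from your own pricing table.
const priceTable = {
  'gpt-4.1-mini': { promptTokensPrice: 0.4, completionTokensPrice: 1.6 },
};

function computeGlobalCost(row, prices) {
  const p = prices[row.modelName];
  if (!p) return null; // unknown model: leave the row for a later run
  const promptCost = (row.promptTokens / 1_000_000) * p.promptTokensPrice;
  const completionCost = (row.completionTokens / 1_000_000) * p.completionTokensPrice;
  return Number((promptCost + completionCost).toFixed(6)); // round for storage
}

const globalCost = computeGlobalCost(
  { modelName: 'gpt-4.1-mini', promptTokens: 1200, completionTokens: 300 },
  priceTable
);
console.log(globalCost);
```

Returning `null` for unknown models instead of `0` keeps missing pricing visibly distinct from genuinely free runs in your sheet.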
You can easily modify the model list and pricing logic to match your own providers or internal chargeback rules. See the full implementation guide below for customization options.
Step-by-Step Implementation Guide
Step 1: Configure the Webhook Trigger
This workflow serves a dashboard over HTTP, so the webhook path and response mode must be set correctly.
- Open Incoming Webhook Endpoint and set Path to `176f23d4-71b3-41e0-9364-43bea6be01d3`.
- Set Response Mode to `responseNode` so the workflow returns the HTML from Return Webhook Output.
- Confirm the connection from Incoming Webhook Endpoint to Assign Date Field.
Step 2: Configure the Chat Trigger and AI Agent
This path handles incoming chat messages, generates an AI response, and logs the conversation.
- Open Incoming Chat Trigger and enable Public as `true`.
- In Conversational AI Agent, set Text to `=Réponds à ce message : {{ $json.chatInput }}` (French for “Reply to this message:”) and keep Prompt Type as `define`.
- Open OpenAI Chat Engine and set Model to `gpt-4.1-mini`.
- Credential Required: Connect your `openAiApi` credentials in OpenAI Chat Engine.
- Ensure Session Memory Buffer is connected as the memory for Conversational AI Agent; add credentials to the parent OpenAI Chat Engine if your setup requires them.
- Confirm the parallel split: Conversational AI Agent outputs to both No-Op Gate and Capture Run Identifier in parallel.
Step 3: Connect Data Tables for Chat Logs and Dashboard Data
This workflow uses multiple data table nodes (6 total). Configure them by function to keep message data, pricing, and usage stats aligned.
- In Retrieve Message Rows, set Operation to `get`, Return All to `true`, and Data Table to `Template - data` (`GyHAqQLTtmZbynYI`).
- In Append Chat Log Row, map columns to expressions like `={{ $('Incoming Chat Trigger').item.json.chatInput }}` and `={{ $json.id }}` for executionId.
- In Fetch Unlabeled Rows, set the modelName filter to `isEmpty` and keep Return All as `true`.
- In Fetch Price Table, set Data Table to `Model - Price` (`5tsC5vulvGwYGS2g`).
- In Update Usage Records, set Operation to `update` and filter by executionId using `={{ $json.executionId }}`.
- In Insert Pricing Row, map columns to `={{ $json.name }}`, `={{ $json.promptTokensPrice }}`, and `={{ $json.completionTokensPrice }}`.
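To make the column mapping concrete, here is an illustrative shape for one chat-log row. The field names mirror the expressions in the steps above, but the helper function itself is hypothetical, not a node from the template.

```javascript
// Build one chat-log row the way "Append Chat Log Row" maps its columns.
// modelName is intentionally left empty so the scheduled enrichment pass
// can find and fill the row later.
function buildChatLogRow({ chatInput, output, executionId, sessionId }) {
  return {
    date: new Date().toISOString().slice(0, 10), // YYYY-MM-DD, like Assign Date Field
    sessionId,
    input: chatInput,
    output,
    executionId,
    modelName: '',
  };
}

const row = buildChatLogRow({
  chatInput: 'What is our refund policy?',
  output: 'Here is a draft answer...',
  executionId: '12345',
  sessionId: 'abc-1',
});
console.log(row);
```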
Step 4: Build the Dashboard Response
This path assembles the KPI dashboard HTML and returns it as the webhook response.
- In Assign Date Field, add the assignment `today` with value `={{ $today }}`.
- Keep the flow: Incoming Webhook Endpoint → Assign Date Field → Retrieve Message Rows → Generate Dashboard HTML.
- In Generate Dashboard HTML, keep the provided JavaScript Code intact (it outputs an HTML dashboard as binary data).
- In Return Webhook Output, set Respond With to `binary`.
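For orientation, here is a minimal sketch of what a Code node has to produce for `binary` response mode to work: an item carrying base64-encoded HTML plus a MIME type. The HTML body and the field totals below are placeholders, not the template’s actual dashboard code.

```javascript
// Build a tiny HTML summary from stored rows and wrap it in the manual
// binary-data shape n8n accepts (base64 body + MIME type + file name).
function buildDashboardBinary(rows) {
  const totalTokens = rows.reduce((sum, r) => sum + (r.totalTokens || 0), 0);
  const html =
    `<html><body><h1>AI Cost Dashboard</h1>` +
    `<p>Messages: ${rows.length}</p>` +
    `<p>Total tokens: ${totalTokens}</p></body></html>`;
  return {
    json: {},
    binary: {
      data: {
        data: Buffer.from(html, 'utf8').toString('base64'),
        mimeType: 'text/html',
        fileName: 'dashboard.html',
      },
    },
  };
}

const item = buildDashboardBinary([{ totalTokens: 1500 }, { totalTokens: 300 }]);
console.log(item.binary.data.mimeType);
```

Because the response is served as `text/html`, a browser hitting the webhook URL renders the dashboard directly instead of downloading a file.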
Step 5: Configure Scheduled Usage Enrichment
This scheduled path enriches chat rows with token usage and cost data.
- Open Scheduled Poll Trigger and set the interval to every `30` minutes.
- Ensure the flow is Scheduled Poll Trigger → Fetch Unlabeled Rows → Iterate Row Batches.
- Iterate Row Batches outputs to both No-Op Placeholder and Retrieve Execution Details for batch processing and debugging.
- In Retrieve Execution Details, set Execution ID to `={{ $json.executionId }}` and connect credentials.
- Credential Required: Connect your `n8nApi` credentials in Retrieve Execution Details.
- In Assemble Token Metrics, confirm the expressions for token counts like `={{ $json.data.resultData.runData['OpenAI Chat Engine'][0].data.ai_languageModel[0][0].json.tokenUsage.totalTokens }}`.
- Assemble Token Metrics outputs to both Fetch Price Table and Combine Model Data in parallel.
- In Combine Model Data, set Mode to `combine` and merge by name ↔ model_name.
- In Compute Global Cost, keep the JavaScript that calculates `globalCost`, then update records via Update Usage Records.
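That long `runData` expression is easier to reason about as a small function. This sketch walks the same path through a trimmed, illustrative execution payload; the real payload returned by the n8n API carries much more detail.

```javascript
// Read token usage from an execution payload, following the same runData
// path as the "Assemble Token Metrics" expressions.
function extractTokenUsage(execution, nodeName) {
  const runs = execution.data.resultData.runData[nodeName];
  if (!runs || !runs.length) return null; // node never ran in this execution
  return runs[0].data.ai_languageModel[0][0].json.tokenUsage;
}

// Trimmed illustration of an execution payload.
const execution = {
  data: {
    resultData: {
      runData: {
        'OpenAI Chat Engine': [
          {
            data: {
              ai_languageModel: [
                [{ json: { tokenUsage: { promptTokens: 120, completionTokens: 80, totalTokens: 200 } } }],
              ],
            },
          },
        ],
      },
    },
  },
};

console.log(extractTokenUsage(execution, 'OpenAI Chat Engine'));
```

Note the path is keyed by node name, so if you rename the chat model node you must update the expression to match.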
Step 6: Seed or Update the Pricing Table
This optional path is used to insert model pricing rows when needed.
- Review the pinned data on Prepare Price Fields to confirm your model prices (e.g., `gpt-4.1-mini` with prompt and completion rates).
- Trigger Insert Pricing Row to insert model pricing into `Model - Price` (`5tsC5vulvGwYGS2g`).
- Keep Prepare Price Fields as a placeholder for future transformations if you need to normalize pricing data.
Step 7: Test and Activate Your Workflow
Validate the webhook dashboard and chat logging before turning the workflow on.
- Click Execute Workflow and send a test message to Incoming Chat Trigger to verify Conversational AI Agent responses and Append Chat Log Row inserts.
- Call the Incoming Webhook Endpoint URL in a browser and confirm a full HTML dashboard is returned by Return Webhook Output.
- Wait for the next Scheduled Poll Trigger run and verify that Update Usage Records populates token and cost fields.
- Once tests succeed, toggle the workflow to Active for production use.
Common Gotchas
- Google Sheets credentials can expire or need specific permissions. If things break, check the n8n Credentials screen and the target Sheet sharing settings first.
- If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
- Default prompts in AI nodes are generic. Add your brand voice early or you’ll be editing outputs forever.
Frequently Asked Questions
How long does setup take?
About 30 minutes if your Sheets and API keys are ready.
Do I need coding skills?
No. You’ll mostly connect accounts and map a few fields in Set and Data Table nodes.
Is n8n free to use?
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in OpenAI API costs, which depend on the model and token volume.
Should I use n8n Cloud or self-host?
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
Can I track costs per client or project?
Yes, and it’s a common tweak. Add a clientId (or project) field where the workflow appends the chat log row, then carry it through the “Assemble Token Metrics” and update steps so your Sheet can pivot by client. Many teams also customize the pricing table to apply different rates per client or environment (prod vs. staging).
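That per-client pivot can be sketched in a few lines; `clientId` and `globalCost` are the assumed column names from the tweak described above.

```javascript
// Sum globalCost per clientId; rows without a client land in "unassigned".
function costByClient(rows) {
  return rows.reduce((totals, row) => {
    const key = row.clientId || 'unassigned';
    totals[key] = (totals[key] || 0) + (row.globalCost || 0);
    return totals;
  }, {});
}

const totals = costByClient([
  { clientId: 'acme', globalCost: 0.004 },
  { clientId: 'acme', globalCost: 0.002 },
  { clientId: 'globex', globalCost: 0.001 },
]);
console.log(totals);
```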
What if token data stops coming through?
Most of the time it’s an API key issue or the wrong org/project selected in your OpenAI credential. Regenerate the key, update it in n8n, and rerun a single test execution to confirm tokens are being returned. If it fails only under load, you may be hitting rate limits, so slow the schedule or batch size a bit.
How much volume can this handle?
A lot. On n8n Cloud Starter you’re limited by monthly executions, while self-hosting has no execution cap (it mostly depends on your server size and how often you poll for updates). Practically, teams logging a few thousand messages a week are fine as long as you batch updates and don’t fetch execution details one-by-one all day.
Is n8n better than Zapier or Make for this?
Often, yes. This workflow relies on scheduled polling, pulling execution details, and updating stored rows after enrichment, which is where Zapier/Make scenarios can get expensive or awkward. n8n also gives you self-hosting for unlimited executions and more flexible branching when you want to treat “missing pricing” differently from “missing tokens.” If you only need a simple “log one response to a sheet” flow, Zapier or Make will feel quicker. For a real dashboard pipeline, n8n is usually the better fit. Talk to an automation expert if you want a second opinion.
Once this is running, cost visibility stops being a fire drill. The workflow tracks the repetitive stuff, and you get a clean view you can actually use.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.