Google Sheets + OpenAI: track token costs per run
You run an AI workflow, it works, and then the bill shows up later with zero context. No idea which run spiked. No clue which model did the damage. Just a monthly charge and a vague sense you should “optimize prompts” somehow.
This OpenAI cost tracking problem hits automation specialists first, honestly. But agency owners and product teams shipping AI features feel it fast too, because clients and stakeholders always ask the same question: “What did that run cost?”
This n8n workflow logs token usage and estimated cost into Google Sheets for every execution, so you can see totals, models, and trends without building a whole finance system.
How This Automation Works
The full n8n workflow, from trigger to final output:
n8n Workflow Template: Google Sheets + OpenAI: track token costs per run
```mermaid
flowchart LR
    subgraph sg0["When clicking ‘Test workflow’ Flow"]
        direction LR
        n0(["When clicking ‘Test workflow’"])
        n7(["When Executed by Another Workflow"])
        n1["n8n: Get Execution Data"]
        n2["Code: Get AI Usage Data"]
        n3["Set: Ai_Run_Data"]
        n4["HTTP Request: Get AI Pricing"]
        n5["Code: Get Models Price and Add Sum.."]
        n6[("Google Sheets")]
        n0 --> n1
        n7 --> n1
        n1 --> n2
        n2 --> n3
        n3 --> n4
        n4 --> n5
        n5 --> n6
    end
    %% Styling
    classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
    classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
    classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
    classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
    class n0,n7 trigger
    class n6 database
    class n4 api
    class n2,n5 code
```
The Problem: Token spend is invisible until it hurts
Most AI workflows don’t fail loudly when they get expensive. They succeed. Quietly. A few extra tool calls here, a longer completion there, a different model chosen “just for this test,” and suddenly you’ve got dozens (or hundreds) of executions with no per-run cost trail. When someone asks why costs jumped last week, you’re stuck guessing: was it the prompt change, the model switch, the new client, or that one workflow looping unexpectedly? Manual tracking doesn’t survive production, and “we’ll check the provider dashboard” usually comes too late to prevent the spend.
The friction compounds. Here’s where it breaks down.
- You can’t tie provider spend back to a specific n8n execution ID, so debugging cost spikes turns into detective work.
- Prompt tokens and completion tokens get mixed together, which makes it hard to know what to optimize.
- Different models have different pricing, and teams often don’t log which model was used per run.
- Without a simple log in a tool everyone already uses, cost monitoring becomes “someone’s job” and then nobody does it.
The Solution: Log token usage and cost to Google Sheets automatically
This workflow acts like a utility you can attach to any AI-driven n8n automation. It’s designed to be called by another workflow (typically at the end of a run) and it uses the execution ID to pull metadata about what happened. From there, it derives prompt and completion token usage, identifies the models and tools involved, and calculates an estimated cost based on pricing rules. If you want, it can also pull live pricing via an HTTP request to a pricing source (the template references Jina’s API as an optional helper). Finally, it writes a clean row into Google Sheets, complete with timestamp, token totals, model info, and cost.
The workflow starts when it’s triggered via an Execute Workflow call (or a manual test run). It then fetches the run metadata, derives usage, and enriches it with pricing. The last step updates your Google Sheet so the “cost story” is visible immediately.
What You Get: Automation vs. Results
| What This Workflow Automates | Results You’ll Get |
|---|---|
| Pulling run metadata by execution ID and deriving prompt/completion token counts | A per-run cost trail you can tie back to a specific n8n execution |
| Looking up model pricing and calculating an estimated cost for each run | Totals, models, and trends visible at a glance, no provider-dashboard detective work |
| Appending a timestamped row to Google Sheets with tokens, models, tools, and cost | A shared log in a tool everyone already uses, so cost monitoring actually happens |
Example: What This Looks Like
Say you run 30 AI-powered executions a day across a couple workflows. Manually checking usage means opening provider dashboards, filtering dates, and trying to match it back to what happened in n8n, which can easily burn about 10 minutes per investigation, and you’ll do it more than once a week. With this workflow attached: you pass the execution ID at the end of each run, wait maybe a minute for pricing lookup and calculations, and the row lands in Google Sheets automatically. That’s a clean log of roughly 200 runs a week without anyone babysitting it.
What You’ll Need
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- Google Sheets for the running cost and token log.
- OpenAI (or another LLM provider) as the token-based API you want to track.
- Jina API key (optional) (get it from your Jina account) for live pricing in the HTTP pricing step.
Skill level: Intermediate. You’ll connect accounts, create a Sheet from a CSV header, and pass an execution ID into an Execute Workflow node.
Don’t want to set this up yourself? Talk to an automation expert (free 15-minute consultation).
How It Works
A workflow calls this utility. Your main automation ends with an “Execute Workflow” node that triggers this template and sends the current execution ID.
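In n8n, the current run’s ID is available in expressions as `$execution.id`. A minimal sketch of what the calling workflow might pass into its Execute Workflow node, assuming the sub-workflow reads a field named `executionId` (the field name is illustrative, not fixed by the template):

```json
{
  "executionId": "{{ $execution.id }}"
}
```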
Run metadata gets pulled in. Using that execution ID, n8n retrieves details about the run so the workflow can infer which AI models were used and what usage data is available.
Tokens and pricing are turned into a real cost. Code steps derive prompt and completion token counts, then an HTTP request can optionally fetch up-to-date pricing. Another code step summarizes per-model totals and calculates the final dollar amount.
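The cost math in that code step boils down to multiplying token counts by per-token rates. A minimal sketch, assuming pricing is expressed in dollars per million tokens; the rates below are placeholders, not live OpenAI pricing:

```javascript
// Estimate cost for one run, given token counts and per-million-token rates.
// The pricing values are placeholders, not current OpenAI prices.
function estimateCost(usage, pricing) {
  const p = pricing[usage.model];
  if (!p) throw new Error(`No pricing for model: ${usage.model}`);
  const inputCost = (usage.promptTokens / 1_000_000) * p.inputPerM;
  const outputCost = (usage.completionTokens / 1_000_000) * p.outputPerM;
  return inputCost + outputCost;
}

const pricing = {
  'gpt-4o-mini': { inputPerM: 0.15, outputPerM: 0.6 }, // placeholder rates
};

const cost = estimateCost(
  { model: 'gpt-4o-mini', promptTokens: 12_000, completionTokens: 3_000 },
  pricing
);
console.log(cost); // a few tenths of a cent for a run this size
```

Keeping the formula in one small function makes it easy to swap in live pricing later without touching the rest of the code step.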
The results land in Google Sheets. A new row is appended with timestamp, token breakdown, models used, tools used, total cost, and a JSON field you can use later for deeper analysis.
You can easily swap Google Sheets for Airtable (or a database) based on your needs. See the full implementation guide below for customization options.
Step-by-Step Implementation Guide
Step 1: Configure the Manual Trigger
This workflow can be started manually or called by another workflow, and both paths converge into the same processing chain.
- Keep the Manual Start Trigger node so you can run on-demand tests from the editor.
- Enable Workflow Call Trigger so this workflow can be executed from another workflow using the Execute Workflow node.
- Connect both Manual Start Trigger and Workflow Call Trigger to Fetch Run Metadata as shown in the execution flow.
Step 2: Connect Google Sheets
The workflow logs summarized AI model costs to a spreadsheet in Update Sheets Log.
- Open Update Sheets Log and select the target spreadsheet and worksheet where you want the cost summary written.
- Credential Required: Connect your Google Sheets credentials.
Step 3: Set Up Processing Nodes
These nodes derive usage data, normalize metrics, and compute per-model costs before writing to the sheet.
- Review Fetch Run Metadata to ensure it pulls the execution metadata you want to analyze.
- In Derive AI Usage, confirm the code calculates token usage or AI metrics based on the metadata.
- Use Assign Run Metrics to map normalized fields (like model name, prompt tokens, completion tokens) for downstream pricing.
- In Summarize Model Costs, verify the code aggregates per-model and total costs.
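The usage-derivation step above amounts to summing token counts per model. A sketch of that aggregation; the entry shape (a `model` name plus a `tokenUsage` object with `promptTokens`/`completionTokens`) is an assumption, since n8n’s execution data format varies by version and node type, so adapt the field paths to what your Fetch Run Metadata step actually returns:

```javascript
// Aggregate prompt/completion tokens per model from run entries.
// Entry shape is assumed; adjust field paths to your execution data.
function aggregateUsage(entries) {
  const byModel = {};
  for (const e of entries) {
    const m = (byModel[e.model] ??= { promptTokens: 0, completionTokens: 0 });
    m.promptTokens += e.tokenUsage.promptTokens ?? 0;
    m.completionTokens += e.tokenUsage.completionTokens ?? 0;
  }
  return byModel;
}

const totals = aggregateUsage([
  { model: 'gpt-4o-mini', tokenUsage: { promptTokens: 800, completionTokens: 200 } },
  { model: 'gpt-4o-mini', tokenUsage: { promptTokens: 400, completionTokens: 100 } },
  { model: 'gpt-4o', tokenUsage: { promptTokens: 500, completionTokens: 50 } },
]);
console.log(totals);
```

Summing per model (rather than one grand total) is what lets the later pricing step apply different rates to each model in the same run.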
Step 4: Configure the Pricing Lookup and Output
The workflow fetches AI pricing, applies it to usage, and logs results to Sheets.
- Configure Retrieve AI Pricing with the endpoint that returns current model pricing (this node runs after Assign Run Metrics).
- Ensure Retrieve AI Pricing flows into Summarize Model Costs, then into Update Sheets Log per the execution flow.
Step 5: Test and Activate Your Workflow
Validate the full chain from metadata extraction through cost logging, then activate for production use.
- Click Execute Workflow using Manual Start Trigger to run a test execution.
- Confirm the workflow completes through Update Sheets Log and that a new row appears in your spreadsheet.
- When satisfied, toggle the workflow Active so it can be invoked via Workflow Call Trigger from other workflows.
Common Gotchas
- Google Sheets credentials can expire or need specific permissions. If things break, check the n8n Credentials screen and confirm the connected Google account still has edit access to the target spreadsheet.
- External calls like the pricing lookup can be slow, so processing times vary. If downstream nodes fail on empty responses, increase the timeout (or the wait duration, if you add Wait nodes).
- Live pricing via HTTP can fail if the Jina API header is missing or blocked by rate limits. When that happens, fall back to static pricing in the code step so logging never stops.
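The fallback pattern from that last gotcha is simple to sketch. This assumes the live lookup returns per-million rates keyed by model name, which is a hypothetical shape, not Jina’s actual response format:

```javascript
// Fall back to a hard-coded pricing table when the live lookup fails,
// so the logging step never stops. Static rates are placeholders.
const STATIC_PRICING = {
  'gpt-4o-mini': { inputPerM: 0.15, outputPerM: 0.6 }, // placeholder
};

function resolvePricing(liveResult) {
  // liveResult is whatever the HTTP pricing step returned; null, undefined,
  // or an empty object means the call failed or was rate-limited.
  if (liveResult && Object.keys(liveResult).length > 0) return liveResult;
  return STATIC_PRICING;
}

console.log(resolvePricing(null) === STATIC_PRICING); // true: fell back
```

Treating the static table as the default (rather than an error path) is what keeps the Sheets row landing even when the pricing API is down.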
Frequently Asked Questions
How long does setup take?
About 30 minutes if your Google Sheet and credentials are ready.
Do I need to know how to code?
No. You will mostly connect accounts and paste in the provided Sheet headers. A tiny bit of comfort editing a value like an API header helps, but it’s not “developer-only.”
Is this free to run?
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in OpenAI API costs, which vary by model and usage.
Should I use n8n Cloud or self-host?
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
Can I get an alert when a run costs too much?
Yes, and it’s a smart upgrade. Add an If node after the cost calculation (right before the Google Sheets update) and check “Total Cost” against your threshold. If it’s over, send a Slack message with the execution ID, model, and token totals, then still write the row to Sheets. That way the alert is immediate, but your reporting stays complete.
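That threshold-plus-message logic fits in a few lines. A sketch with a $5 threshold as an illustrative value; the field names (`executionId`, `model`, `totalTokens`, `totalCost`) mirror the columns this workflow logs:

```javascript
// Build a Slack-style alert message when a run's cost crosses a threshold.
// The $5 default is illustrative; tune it to your own budget.
function buildCostAlert(run, threshold = 5.0) {
  if (run.totalCost <= threshold) return null; // under budget: no alert
  return (
    `Cost alert: execution ${run.executionId} spent ` +
    `$${run.totalCost.toFixed(2)} on ${run.model} ` +
    `(${run.totalTokens} tokens, threshold $${threshold.toFixed(2)})`
  );
}

const msg = buildCostAlert({
  executionId: '18342',
  model: 'gpt-4o',
  totalTokens: 95000,
  totalCost: 7.25,
});
console.log(msg);
```

Returning `null` for under-budget runs maps cleanly onto an If node’s false branch, so the Sheets write happens either way.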
What should I check if rows stop appearing in the sheet?
Usually it’s expired Google authorization or the sheet changed permissions. Reconnect the Google Sheets credential in n8n, then confirm the spreadsheet is shared with the same Google account. Also check that the workflow is pointing at the correct spreadsheet and tab, because renamed tabs can break updates.
How many executions can this handle?
On n8n Cloud you’re mainly limited by your monthly executions plan, while self-hosting depends on your server. In practice, logging one row per execution scales well into thousands of runs per month, because Google Sheets appends are lightweight compared to the AI calls themselves.
Do I really need n8n for this, or could Zapier or Make handle it?
Often, yes, because this workflow depends on grabbing n8n execution context and doing flexible code-based cost calculations. n8n also gives you self-hosting for unlimited executions and more control over how you enrich and store the data. Zapier or Make can work for basic “log a value to Sheets,” but you’ll hit walls when you need branching logic, model-level summaries, or provider pricing fallbacks. If you’re uncertain, talk to an automation expert and you’ll get a straight recommendation.
Once this is in place, every AI run leaves a receipt in Google Sheets. You stop guessing, and you start managing.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.