OpenAI to Google Sheets, track token costs per run
Your AI bills usually don’t blow up because of one big decision. It’s the tiny stuff. A prompt tweak here, a new tool call there, a retry loop you forgot about. Then the invoice hits and you are stuck guessing what changed.
OpenAI cost tracking hits Marketing Ops teams hard, honestly, because they run a lot of experiments. But agency owners billing clients and product teams shipping AI features feel it too. You need clean, per-run numbers you can trust, not a monthly total that’s impossible to explain.
This n8n workflow logs token usage into Google Sheets on every run, using execution metadata so you can see what actually happened. You’ll learn what it captures, what you need to connect, and how to make the spreadsheet useful for budgeting and client reporting.
How This Automation Works
The full n8n workflow, from trigger to final output:
n8n Workflow Template: OpenAI to Google Sheets, track token costs per run
flowchart LR
subgraph sg0["Subworkflow Execution Flow"]
direction LR
n4@{ icon: "mdi:swap-vertical", form: "rounded", label: "Map Token Usage Details", pos: "b", h: 48 }
n5["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/n8n.svg' width='40' height='40' /></div><br/>Fetch Execution Details"]
n6@{ icon: "mdi:play-circle", form: "rounded", label: "Subworkflow Execution Trigger", pos: "b", h: 48 }
n7@{ icon: "mdi:swap-vertical", form: "rounded", label: "Expand Usage Records", pos: "b", h: 48 }
n8@{ icon: "mdi:cog", form: "rounded", label: "Aggregate Token Totals", pos: "b", h: 48 }
n9@{ icon: "mdi:database", form: "rounded", label: "Log Token Metrics to Sheet", pos: "b", h: 48 }
n7 --> n8
n5 --> n4
n4 --> n7
n6 --> n5
n8 --> n9
end
subgraph sg1["Manual Start Flow"]
direction LR
n0@{ icon: "mdi:play-circle", form: "rounded", label: "Manual Start Trigger", pos: "b", h: 48 }
n1@{ icon: "mdi:robot", form: "rounded", label: "Intelligent Agent Runner", pos: "b", h: 48 }
n2@{ icon: "mdi:cog", form: "rounded", label: "Run Sub-Workflow (Configure ..", pos: "b", h: 48 }
n3@{ icon: "mdi:wrench", form: "rounded", label: "Deliberation Tool", pos: "b", h: 48 }
n11@{ icon: "mdi:brain", form: "rounded", label: "OpenAI Chat Model", pos: "b", h: 48 }
n3 -.-> n1
n11 -.-> n1
n1 --> n2
n0 --> n1
end
subgraph sg2["Flow 3"]
direction LR
n10@{ icon: "mdi:brain", form: "rounded", label: "Utility: Gemini Model", pos: "b", h: 48 }
end
subgraph sg3["Flow 4"]
direction LR
n12@{ icon: "mdi:brain", form: "rounded", label: "Utility: Anthropic Model", pos: "b", h: 48 }
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n6,n0 trigger
class n1 ai
class n11,n10,n12 aiModel
class n3 ai
class n9 database
classDef customIcon fill:none,stroke:none
class n5 customIcon
The Problem: AI costs are hard to explain per run
Token usage is one of those things that sounds simple until you try to track it. You run an agent, it calls tools, it retries a step, it fans out into branches, and suddenly the “cost” of that single request is spread across multiple LLM calls and hidden details. If you only look at provider dashboards, you can see totals, not the “why.” And if you are billing clients or trying to keep a model change from quietly doubling spend, totals are not enough. You need per-execution cost visibility, tied to the workflow run that caused it.
The friction compounds. Here’s where it breaks down.
- Agents often don’t return complete token usage for tool calls, so your “usage” numbers look suspiciously low.
- When prompts change, you feel the cost increase later, after you’ve already shipped the change.
- Without a per-run log, client billing turns into manual guesswork and awkward conversations.
- Retries and branching can multiply usage, and most teams only notice after the monthly bill arrives.
The Solution: Log token usage from execution metadata into Sheets
This workflow solves the “missing data” problem by pulling token usage from n8n’s own execution metadata after the agent finishes. The run starts with an AI Agent (OpenAI is included, with Gemini and Anthropic models also available), then it triggers a sub-workflow designed specifically to compute usage. That sub-workflow fetches the execution details, maps the usage records into a consistent structure, expands them into line items, and aggregates totals for the run. Finally, it writes the metrics into a Google Sheet that already contains formulas to estimate cost per execution. You end up with a clean audit trail: every run, its token totals, and an estimated dollar cost you can filter, chart, or bill from.
The flow is simple in practice. Run the agent, then call the “token usage” sub-workflow after all branches finish. The sub-workflow pulls execution details from your n8n instance, summarizes the totals, and logs one row into Google Sheets.
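Under the hood, "pull execution details" is a call to n8n's public REST API. Here's a minimal plain-JavaScript sketch of that step, assuming a Node 18+ runtime with global `fetch`; the helper names and environment variables are illustrative, not part of the template:

```javascript
// Sketch of what the sub-workflow does behind the scenes: fetch a
// finished execution's full run data from the n8n REST API using an
// API key. N8N_BASE_URL and N8N_API_KEY are assumed env vars.

const N8N_BASE_URL = process.env.N8N_BASE_URL || "http://localhost:5678";

// Build the request URL for one execution, asking for full run data.
function executionUrl(baseUrl, executionId) {
  return `${baseUrl}/api/v1/executions/${executionId}?includeData=true`;
}

async function fetchExecution(executionId) {
  const res = await fetch(executionUrl(N8N_BASE_URL, executionId), {
    headers: { "X-N8N-API-KEY": process.env.N8N_API_KEY },
  });
  if (!res.ok) throw new Error(`n8n API returned ${res.status}`);
  return res.json(); // contains data.resultData.runData per node
}
```

In the template you never write this code yourself; the n8n node in the sub-workflow makes the same call once you supply the API key.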
What You Get: Automation vs. Results
| What This Workflow Automates | Results You’ll Get |
|---|---|
| Fetching execution details and extracting token usage per LLM call | Per-run token totals and an estimated cost, tied to the execution that caused them |
| Aggregating totals and appending a row to Google Sheets after every run | A filterable audit trail you can use for budgeting and client billing |
Example: What This Looks Like
Say your team runs an agent-driven workflow about 40 times a day (support replies, brief generation, content QA, the usual). Manually checking dashboards, exporting usage, and trying to match it back to runs can easily take 5 minutes per run, which is roughly 3 hours a day. With this workflow, logging is automatic once it’s set up. You trigger the run, the sub-workflow fetches execution details in the background, and Google Sheets gets a new row in about a minute.
What You’ll Need
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- Google Sheets for logging token metrics and costs
- OpenAI to run the example agent (or Gemini/Anthropic)
- n8n API key (get it from your n8n user settings)
Skill level: Intermediate. You’ll connect credentials, paste an API key, and understand where to place the sub-workflow call so it runs after everything else.
Don’t want to set this up yourself? Talk to an automation expert (free 15-minute consultation).
How It Works
An agent run kicks things off. In the template, a manual trigger starts an AI Agent that can use OpenAI (and includes Gemini and Anthropic model nodes as alternatives).
A sub-workflow runs after the agent finishes. This part matters. Token usage is most reliable when you fetch it from the completed execution, so the workflow calls a separate “usage calculator” sub-workflow once the main work is done.
Execution metadata is pulled and cleaned up. The sub-workflow uses your n8n API key to fetch execution details, then maps token usage into structured fields, expands usage records, and summarizes totals.
Google Sheets becomes your ledger. The workflow logs token totals into a sheet that includes formulas to estimate cost per run, so you can filter by date, client, workflow, or model.
You can easily modify the sheet columns to include client IDs or campaign names based on your needs. See the full implementation guide below for customization options.
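The sheet's cost formulas boil down to simple arithmetic: tokens times a per-token rate. A sketch of that calculation, with placeholder prices (the rates below are assumptions for illustration, not current provider pricing):

```javascript
// Cost estimation as the spreadsheet formulas compute it: token counts
// times per-1M-token rates. The rates here are placeholder examples --
// check OpenAI/Gemini/Anthropic pricing pages before trusting dollars.
const PRICES_PER_1M_TOKENS = {
  "gpt-4o-mini": { input: 0.15, output: 0.6 }, // assumed example rates
};

function estimateCostUSD(model, promptTokens, completionTokens) {
  const p = PRICES_PER_1M_TOKENS[model];
  if (!p) return null; // unknown model: leave the cost cell blank
  return (promptTokens * p.input + completionTokens * p.output) / 1e6;
}
```

Keeping the price table in the sheet (rather than in code) means you can update rates without touching the workflow.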
Step-by-Step Implementation Guide
Step 1: Configure the Manual Trigger
Start the workflow with a manual trigger so you can run tests while building the token tracking pipeline.
- Add the Manual Start Trigger node as the entry point.
- Connect Manual Start Trigger to Intelligent Agent Runner to follow the execution flow.
Step 2: Connect n8n API and Google Sheets
Set credentials for internal execution lookups and metrics logging.
- Open Fetch Execution Details and select Credential Required: Connect your n8nApi credentials.
- Open Log Token Metrics to Sheet and select Credential Required: Connect your googleSheetsOAuth2Api credentials.
- In Log Token Metrics to Sheet, set Document to your spreadsheet ID (replace `[YOUR_ID]`), and set Sheet Name to `Executions` (gid=0).
Step 3: Set Up Intelligent Agent Runner
Configure the AI agent that generates the test run and drives downstream usage tracking.
- In Intelligent Agent Runner, set Prompt Type to `define`.
- Set Text to `Help me test something and output the text "This is a test workflow" after calling the think tool twice.`
- Ensure Deliberation Tool is connected as the tool for Intelligent Agent Runner; tool credentials are managed on the parent agent, not the tool node.
- Open OpenAI Chat Model and select Credential Required: Connect your openAiApi credentials.
- Optional: If you plan to switch models, configure Utility: Gemini Model with Credential Required: Connect your googlePalmApi credentials and set Model Name to `models/gemini-2.5-flash`.
- Optional: Configure Utility: Anthropic Model with Credential Required: Connect your anthropicApi credentials and set Model to `claude-3-haiku-20240307`.
Step 4: Configure the Sub-Workflow Execution
Trigger the sub-workflow and capture the execution ID for usage analytics.
- Open Run Sub-Workflow (Configure Required) and set Mode to `each`.
- Set Workflow ID to the sub-workflow you want to analyze (this field is currently empty and must be configured).
- Under Workflow Inputs, map execution_id to `{{ $execution.id }}`.
- Confirm the execution flow: Intelligent Agent Runner → Run Sub-Workflow (Configure Required) and Subworkflow Execution Trigger → Fetch Execution Details.
Step 5: Set Up Token Usage Mapping and Aggregation
Extract AI token usage from execution data and prepare totals per model and workflow.
- In Fetch Execution Details, set Resource to `execution`, Operation to `get`, and Execution ID to `{{ $json.execution_id }}`.
- In Map Token Usage Details, keep Include set to `selected` and Include Fields to `workflowData.id, workflowData.name`.
- In Map Token Usage Details, set execution_id to `{{ $('Subworkflow Execution Trigger').item.json.execution_id }}`.
- In Map Token Usage Details, set tokenUsage to the JMESPath extraction expression: `{{$jmespath( $json, "data.resultData.runData.*[] | [?data.ai_languageModel] | [].{model: data.ai_languageModel[0][0].json.response.generations[0][0].generationInfo.model_name || inputOverride.ai_languageModel[0][0].json.options.model_name || inputOverride.ai_languageModel[0][0].json.options.model, tokenUsage: data.ai_languageModel[0][0].json.tokenUsage || data.ai_languageModel[0][0].json.tokenUsageEstimate}" )}}`
- In Expand Usage Records, set Field To Split Out to `tokenUsage` and keep Include as `allOtherFields`.
- In Aggregate Token Totals, set Fields To Split By to `id, name, tokenUsage.model, execution_id` and summarize tokenUsage.tokenUsage.promptTokens and tokenUsage.tokenUsage.completionTokens using `sum`.
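That JMESPath expression is dense. If you'd rather use a Code node, here is a roughly equivalent plain-JavaScript sketch; the field paths follow n8n's execution data layout as used in the expression above, and the simplified model-name fallback chain is an assumption:

```javascript
// Rough plain-JS equivalent of the template's JMESPath extraction:
// walk runData, keep runs that produced ai_languageModel output, and
// collect { model, tokenUsage } per LLM call.
function extractTokenUsage(runData) {
  const records = [];
  for (const runs of Object.values(runData)) {
    for (const run of runs) {
      const out = run.data && run.data.ai_languageModel;
      if (!out) continue; // not an LLM node run
      const item = out[0][0].json;
      const model =
        item.response?.generations?.[0]?.[0]?.generationInfo?.model_name ||
        run.inputOverride?.ai_languageModel?.[0]?.[0]?.json?.options?.model;
      // Prefer exact counts; fall back to the estimate some models report.
      records.push({ model, tokenUsage: item.tokenUsage || item.tokenUsageEstimate });
    }
  }
  return records;
}
```

Either way, the output is one record per LLM call, which the next two nodes expand and sum.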
Step 6: Configure the Google Sheets Output
Append each aggregated usage record into your tracking sheet.
- In Log Token Metrics to Sheet, set Operation to `append`.
- Map the columns as follows: llm_model → `{{ $json.tokenUsage_model }}`, timestamp → `{{ $now.format('yyyy-MM-dd HH:mm:ss') }}`, workflow_id → `{{ $json.id }}`, execution_id → `{{ $json.execution_id }}`, input tokens → `{{ $json.sum_tokenUsage_tokenUsage_promptTokens }}`, workflow_name → `{{ $json.name }}`, completion tokens → `{{ $json.sum_tokenUsage_tokenUsage_completionTokens }}`.
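Put together, the column mapping above turns one aggregated item into a single sheet row. A sketch (`toSheetRow` is a hypothetical helper mirroring those expressions; note it formats the timestamp in UTC, whereas `$now` uses your n8n instance's timezone):

```javascript
// Step 6's column mapping as one function. The field names on `item`
// are the ones the Aggregate node produces in Step 5. toSheetRow is a
// hypothetical helper for illustration, not a node in the template.
function toSheetRow(item, now = new Date()) {
  return {
    timestamp: now.toISOString().slice(0, 19).replace("T", " "),
    workflow_id: item.id,
    workflow_name: item.name,
    execution_id: item.execution_id,
    llm_model: item.tokenUsage_model,
    "input tokens": item.sum_tokenUsage_tokenUsage_promptTokens,
    "completion tokens": item.sum_tokenUsage_tokenUsage_completionTokens,
  };
}
```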
Step 7: Test and Activate Your Workflow
Run a manual execution to verify that token usage is captured and logged correctly.
- Click Execute Workflow to run from Manual Start Trigger.
- Confirm that Fetch Execution Details returns execution data and Aggregate Token Totals produces summed token counts.
- Check your Google Sheet to verify a new row is appended by Log Token Metrics to Sheet with the model name, prompt tokens, and completion tokens.
- Once validated, activate the workflow for production use.
Common Gotchas
- Google Sheets credentials can expire or need specific permissions. If things break, check the Google connection in n8n’s Credentials screen first.
- The sub-workflow must run after every agent branch finishes. Call it too early and the execution data it fetches is still incomplete, so token totals come up short.
- Some tool calls don’t report token usage at all, so treat the logged totals as a floor rather than an exact invoice line.
Frequently Asked Questions
How long does this take to set up?
About 30–60 minutes if your credentials are ready.
Do I need to know how to code?
No. You’ll connect accounts and paste an n8n API key. The rest is configuration and making sure the sub-workflow runs after your agent finishes.
Can I run this for free?
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in LLM API costs from OpenAI (or Gemini/Anthropic), which depend on your model and token volume.
Where should I host n8n?
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
Can I customize this for each client?
Yes, and you should. Add a “client” field in the Map Token Usage Details (Set) step, then write it into Google Sheets alongside the execution ID and totals. Common tweaks include adding a project name, capturing the model used (OpenAI vs Gemini vs Anthropic), and tagging runs from Telegram with a client code.
Why is the Google Sheets step failing?
Usually it’s expired Google credentials or missing access to the target spreadsheet. Reconnect the Google Sheets credential in n8n, then confirm the spreadsheet lives in the same Google account (or is shared with it). Also check that the sheet tab name matches what the node is writing to. If you recently copied the template, it’s easy to forget to point the node at your own file.
How much time will this save?
A lot. At the 40-runs-a-day pace from the example above, manual reconciliation costs roughly 3 hours a day; with this workflow, logging is automatic.
Why n8n instead of Zapier or Make?
For this specific job, n8n is usually the better fit because the key data lives in execution metadata and you often need branching, summarizing, and a sub-workflow to get accurate totals. Zapier and Make can log rows to Sheets, sure, but they’re not built around “inspect the run that just happened” in the same way. n8n also gives you a self-hosting option, which is useful if you run lots of executions and don’t want usage-based pricing surprises. If you only need a basic “log one value to Sheets” scenario, Zapier or Make may feel faster to set up. Talk to an automation expert if you want help choosing.
Once this is logging, cost stops being a surprise and starts being a metric you can manage. The workflow keeps the ledger clean, so you can focus on improving prompts, shipping features, and billing with confidence.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.