OpenAI to Google Sheets, track LLM spend per client
You roll out “a quick AI feature,” and suddenly OpenAI charges feel… mysterious. A few weeks later, a client asks for a breakdown, finance asks for a ledger, and you’re stuck piecing together costs from scattered logs and guesses.
OpenAI spend tracking hits hardest when you’re serving multiple clients inside one automation. Agency owners feel it during invoicing. Product teams feel it when usage spikes overnight and nobody knows why.
This n8n workflow logs each LLM call into Google Sheets with client and request details, so you can bill accurately and spot waste early. You’ll see what it does, what you need, and how it behaves in the real world.
How This Automation Works
See how this solves the problem:
n8n Workflow Template: OpenAI to Google Sheets, track LLM spend per client
flowchart LR
subgraph sg0["On form submission Flow"]
direction LR
n0@{ icon: "mdi:database", form: "rounded", label: "Client Usage Log", pos: "b", h: 48 }
n1@{ icon: "mdi:swap-vertical", form: "rounded", label: "Logging Attributes", pos: "b", h: 48 }
n2["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/form.svg' width='40' height='40' /></div><br/>On form submission"]
n3@{ icon: "mdi:robot", form: "rounded", label: "Custom LLM Subnode", pos: "b", h: 48 }
n4["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/form.svg' width='40' height='40' /></div><br/>Display JSON Document"]
n10@{ icon: "mdi:cog", form: "rounded", label: "Parse PDF Upload", pos: "b", h: 48 }
n11@{ icon: "mdi:robot", form: "rounded", label: "Extract Resume Data", pos: "b", h: 48 }
n0 -.-> n3
n10 --> n1
n3 -.-> n11
n1 --> n11
n2 --> n10
n11 --> n4
end
subgraph sg1["Every End of Month Flow"]
direction LR
n5@{ icon: "mdi:swap-horizontal", form: "rounded", label: "Filter Last Month", pos: "b", h: 48 }
n6@{ icon: "mdi:database", form: "rounded", label: "Get Client Logs", pos: "b", h: 48 }
n7@{ icon: "mdi:cog", form: "rounded", label: "Calculate Totals", pos: "b", h: 48 }
n8@{ icon: "mdi:play-circle", form: "rounded", label: "Every End of Month", pos: "b", h: 48 }
n9@{ icon: "mdi:message-outline", form: "rounded", label: "Send Invoice", pos: "b", h: 48 }
n6 --> n5
n7 --> n9
n5 --> n7
n8 --> n6
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n2,n8 trigger
class n3,n11 ai
class n5 decision
class n0,n6 database
classDef customIcon fill:none,stroke:none
class n2,n4 customIcon
The Challenge: Tracking LLM Spend Per Client Without a Mess
LLM costs are “small” until they aren’t. One feature turns into three, then someone adds a new prompt, then retries start firing, then a client uploads bigger files than you expected. Now you have usage spread across execution logs, scattered prompts, and a monthly OpenAI bill that does not map cleanly to who used what. The worst part is the timing: you usually discover the problem after the money is already spent and you’re under pressure to justify it.
It adds up fast. Here’s where it breaks down in day-to-day ops.
- You can’t confidently allocate OpenAI costs per client, so invoicing becomes hand-wavy and awkward.
- Support and delivery teams lack a simple “spend ledger,” which means every cost question becomes a mini-investigation.
- Retries, larger inputs, and prompt changes quietly inflate usage, and you find out after the month closes.
- Even if you do track it, the data often lives in a place non-technical people never check.
The Fix: Log Every OpenAI Call Into Google Sheets
This workflow turns each LLM call into a clean row in Google Sheets, including which client it belonged to and what request triggered it. It starts when a user submits a request to your “AI service” (the template uses a resume PDF → structured JSON example). The file gets parsed, you attach a few business fields like client name or project ID, and then the OpenAI step runs with a custom LLM subnode built in a Langchain Code node. That custom piece captures token usage and cost metadata during the model’s lifecycle, then writes it into a Google Sheet so the record is permanent and easy to audit. Finally, you can aggregate the sheet for client-level totals and billing.
The workflow begins with a Webhook/form upload, then extracts data from the file and enriches it with client identifiers. OpenAI processes the content (in this case, an Information Extractor that formats the resume into a JSON schema). Google Sheets receives a usage log row you can total up later, without digging through n8n executions.
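To make the billing math concrete, here’s a minimal sketch of the per-call cost calculation that the logged token metadata enables. The model name and per-1K-token prices below are illustrative assumptions, not authoritative figures; use the rates from OpenAI’s current pricing page.

```javascript
// Hypothetical per-1K-token prices -- check OpenAI's pricing page;
// these numbers are illustrative assumptions, not real rates.
const PRICES_PER_1K = {
  "gpt-4o-mini": { prompt: 0.00015, completion: 0.0006 },
};

// Turn raw token counts into a dollar cost for one call.
function estimateCost(model, promptTokens, completionTokens) {
  const p = PRICES_PER_1K[model];
  if (!p) throw new Error(`No pricing configured for model: ${model}`);
  return (
    (promptTokens / 1000) * p.prompt +
    (completionTokens / 1000) * p.completion
  );
}

// Example: a call that used 1,200 prompt tokens and 300 completion tokens.
const cost = estimateCost("gpt-4o-mini", 1200, 300);
console.log(cost.toFixed(6));
```

Storing the cost per row (rather than recomputing it later) means your ledger stays correct even if model prices change mid-month.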
What Changes: Before vs. After
| What This Eliminates | Impact You’ll See |
|---|---|
| Hand-wavy per-client cost allocation | Invoices backed by a row-level ledger |
| Mini-investigations for every cost question | Answers from a quick filter in Google Sheets |
| Silent usage inflation from retries and bigger inputs | Spikes visible as they happen, not after the month closes |
| Spend data buried in n8n execution logs | A sheet non-technical teammates can actually check |
Real-World Impact
Say you process 30 resume-to-JSON requests a day across 6 clients. Without logging, you might spend 10 minutes per client at month-end just reconstructing what happened (about an hour across six clients), plus another hour sanity-checking invoices and answering questions. With this workflow, each request writes a row to Google Sheets automatically. Day-to-day effort is essentially zero, and month-end becomes “filter by client, sum the cost,” which takes a few minutes.
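That “filter by client, sum the cost” step amounts to a simple rollup over the ledger rows. Here’s a plain-JavaScript sketch; the field names (`client`, `cost`) are assumptions you’d match to your actual sheet columns.

```javascript
// Sample ledger rows like the ones the workflow appends to Google Sheets.
// Field names (client, cost) are assumptions -- match your sheet's columns.
const rows = [
  { client: "Acme", cost: 0.0021 },
  { client: "Acme", cost: 0.0034 },
  { client: "Globex", cost: 0.0012 },
];

// Month-end rollup: total spend per client.
function totalsByClient(rows) {
  const totals = {};
  for (const { client, cost } of rows) {
    totals[client] = (totals[client] || 0) + cost;
  }
  return totals;
}

console.log(totalsByClient(rows));
```

In practice a SUMIF in the sheet does the same job; the point is that once every call is a row, the aggregation is trivial wherever you run it.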
Requirements
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- Google Sheets for the usage ledger and reporting
- OpenAI to run the LLM extraction step
- OpenAI API key (get it from your OpenAI dashboard)
Skill level: Intermediate. You should be comfortable self-hosting n8n and updating a few nodes/fields, but you won’t need to build an app from scratch.
Need help implementing this? Talk to an automation expert (free 15-minute consultation).
The Workflow Flow
A request hits your Webhook/form upload. In the template, a user uploads a resume PDF to a mock “data conversion API,” which triggers the entire run.
The file is parsed and business context is added. n8n extracts text from the PDF, then an Edit Fields (Set) step captures extra variables you care about, like client name, customer ID, project, or internal cost center.
OpenAI structures the content into your schema. An Information Extractor organizes the resume into JSON. The key detail is the custom Langchain Code LLM subnode attached to that extractor, which captures usage metadata (tokens and cost) while the model runs.
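The lifecycle-hook idea behind that custom subnode can be sketched as a LangChain-style callback handler. The `handleLLMEnd` payload shape (`llmOutput.tokenUsage`) follows LangChain JS for OpenAI models; the exact wiring inside n8n’s Langchain Code node differs, so treat this as an illustration of the mechanism rather than the template’s code.

```javascript
// Sketch of the lifecycle-hook mechanism: a LangChain-style callback
// that reads token usage when the model call finishes. The payload shape
// follows LangChain JS (llmOutput.tokenUsage); n8n's wiring differs.
class UsageCapture {
  constructor() {
    this.usage = null;
  }
  // Called by the framework when the LLM call completes.
  handleLLMEnd(result) {
    const u = result.llmOutput?.tokenUsage;
    if (u) {
      this.usage = {
        promptTokens: u.promptTokens,
        completionTokens: u.completionTokens,
        totalTokens: u.totalTokens,
      };
    }
  }
}

// Simulate what the framework would pass after an OpenAI call.
const capture = new UsageCapture();
capture.handleLLMEnd({
  llmOutput: {
    tokenUsage: { promptTokens: 1200, completionTokens: 300, totalTokens: 1500 },
  },
});
console.log(capture.usage);
```

The value of hooking the lifecycle, rather than parsing logs afterwards, is that the usage numbers travel with the same execution that knows which client triggered it.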
Google Sheets receives the spend log. The workflow writes a row containing request details plus the usage metadata, so reporting becomes a simple filter-and-sum job.
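Conceptually, the appended row combines the business fields from the Set step with the captured usage metadata. A sketch of that assembly step; the column names here are assumptions you’d align with your sheet’s header row:

```javascript
// Assemble one ledger row before appending it to Google Sheets.
// Column names are assumptions -- align them with your sheet's header row.
function buildLogRow({ client, projectId, model, usage, costUsd }) {
  return {
    timestamp: new Date().toISOString(),
    client,
    projectId,
    model,
    promptTokens: usage.promptTokens,
    completionTokens: usage.completionTokens,
    totalTokens: usage.totalTokens,
    costUsd: Number(costUsd.toFixed(6)),
  };
}

const row = buildLogRow({
  client: "Acme",
  projectId: "resume-parser",
  model: "gpt-4o-mini",
  usage: { promptTokens: 1200, completionTokens: 300, totalTokens: 1500 },
  costUsd: 0.00036,
});
console.log(row);
```

Keeping one row per call (rather than pre-aggregating) preserves the audit trail: you can always answer “which request cost what” later.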
You can modify the “client fields” to match your billing model. See the full implementation guide below for customization options.
Watch Out For
- Google Sheets credentials can expire or need specific permissions. If things break, check the n8n Credentials panel and the sheet’s sharing settings first.
- If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
- Default prompts in AI nodes are generic. Add your brand voice early or you’ll be editing outputs forever.
Common Questions
**How long does setup take?**
About an hour if self-hosting is already set up.
**Can non-technical team members run this?**
Yes, but someone on your team should be comfortable with self-hosting n8n. Once it’s running, the day-to-day use is just checking a Google Sheet.
**Is it free?**
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in OpenAI API costs, which vary based on tokens and model.
**Should I use n8n Cloud or self-host?**
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
**Can I swap out the trigger or the Google Sheets output?**
You can keep the same “log usage metadata” idea and swap the input and output pieces around it. Many teams replace the resume PDF trigger with a product event webhook, a Telegram intake, or a form submission, then update the Edit Fields (Set) node to capture the client/project identifiers you bill against. If you don’t want Google Sheets, you can send the same metadata via an HTTP Request node to your CRM, database, or billing tool. The key is keeping the custom LLM subnode attached to a supported node like Information Extractor so the lifecycle hooks can capture tokens and cost.
**Why is the Google Sheets node failing?**
Usually it’s expired credentials or the Google account doesn’t have access to the target spreadsheet. Reconnect the Google Sheets credential in n8n, then confirm the exact sheet and tab still exist and haven’t been renamed.
**How much volume can this handle?**
On self-hosted n8n there’s no execution cap, so capacity mostly depends on your server and OpenAI rate limits. In practice, teams run hundreds of logged calls a day without thinking about it, as long as the Google Sheet isn’t hitting its own limits and you’re not writing thousands of rows per hour.
**Do I need n8n, or would Zapier or Make work?**
For this specific use case, usually yes, because the template relies on a Langchain Code node that’s only available in self-hosted n8n and is designed to capture LLM lifecycle metadata. Zapier and Make can log rows to Google Sheets, but getting reliable token/cost metadata per call is the hard part. If you only need a basic “request happened” log, those tools are fine. If you need an audit-friendly ledger that ties costs to clients, n8n is a better fit. Talk to an automation expert if you want help choosing.
Once this is in place, LLM spend stops being a mystery and starts being a line item you can defend. Set it up, let it run, and use the clarity to bill cleanly and optimize with confidence.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.