OpenAI + Google Sheets: AI visibility audit tracking
“Are we showing up in AI answers?” is a simple question. Getting a reliable answer is the painful part, because you end up running the same prompt in multiple tools, copying responses into a sheet, and trying to make sense of it all later.
The pain of manual AI visibility audits hits marketing managers first, but comms leads and agency operators feel it too. With this automation, you stop losing afternoons to manual checks and start building a clean, repeatable dataset you can actually report on.
This workflow sends one prompt to OpenAI and Perplexity (with an optional ChatGPT web scrape path), analyzes sentiment and brand ranking, then logs everything into Google Sheets. Below, you’ll see exactly how it runs and what outcomes to expect.
How This Automation Works
The full n8n workflow, from trigger to final output:
n8n Workflow Template: OpenAI + Google Sheets: AI visibility audit tracking
flowchart LR
subgraph sg0["Manual Flow"]
direction LR
n0@{ icon: "mdi:play-circle", form: "rounded", label: "Manual Trigger", pos: "b", h: 48 }
n6@{ icon: "mdi:database", form: "rounded", label: "Append row in sheet", pos: "b", h: 48 }
n7@{ icon: "mdi:brain", form: "rounded", label: "OpenAI Chat Model1", pos: "b", h: 48 }
n8@{ icon: "mdi:robot", form: "rounded", label: "Response Sentiment Analyse3", pos: "b", h: 48 }
n9@{ icon: "mdi:robot", form: "rounded", label: "Structured Output Parser3", pos: "b", h: 48 }
n10@{ icon: "mdi:brain", form: "rounded", label: "OpenAI Chat Model (GPT 5)", pos: "b", h: 48 }
n11@{ icon: "mdi:robot", form: "rounded", label: "OpenAI Anfrage", pos: "b", h: 48 }
n12@{ icon: "mdi:swap-vertical", form: "rounded", label: "OpenAI", pos: "b", h: 48 }
n13["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>APIfy Call ChatGPT Scraper"]
n15["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/perplexity.dark.svg' width='40' height='40' /></div><br/>Perplexity Request"]
n18@{ icon: "mdi:database", form: "rounded", label: "Read Prompts1", pos: "b", h: 48 }
n19@{ icon: "mdi:swap-vertical", form: "rounded", label: "Loop Over prompts", pos: "b", h: 48 }
n20@{ icon: "mdi:cog", form: "rounded", label: "before-loop-input", pos: "b", h: 48 }
n21@{ icon: "mdi:swap-vertical", form: "rounded", label: "manual input", pos: "b", h: 48 }
n22@{ icon: "mdi:cog", form: "rounded", label: "Limit for testing", pos: "b", h: 48 }
n23@{ icon: "mdi:swap-vertical", form: "rounded", label: "Perplexity Mapper", pos: "b", h: 48 }
n24@{ icon: "mdi:swap-vertical", form: "rounded", label: "ChatGPT Mapper", pos: "b", h: 48 }
n25@{ icon: "mdi:swap-vertical", form: "rounded", label: "Prepare Sheet Columns", pos: "b", h: 48 }
n26@{ icon: "mdi:cog", form: "rounded", label: "normalized-tool-response", pos: "b", h: 48 }
n27@{ icon: "mdi:cog", form: "rounded", label: "loop-input", pos: "b", h: 48 }
n28@{ icon: "mdi:swap-horizontal", form: "rounded", label: "If sucessfull", pos: "b", h: 48 }
n29@{ icon: "mdi:cog", form: "rounded", label: "loop-end", pos: "b", h: 48 }
n12 --> n26
n29 --> n19
n27 --> n11
n27 --> n15
n27 --> n13
n21 --> n20
n28 --> n24
n28 --> n29
n18 --> n22
n24 --> n26
n0 --> n18
n11 --> n12
n22 --> n20
n19 --> n27
n23 --> n26
n20 --> n19
n7 -.-> n8
n15 --> n23
n6 --> n29
n25 --> n6
n26 --> n8
n10 -.-> n11
n9 -.-> n8
n13 --> n28
n8 --> n25
end
subgraph sg1["Response Sentiment A Flow"]
direction LR
n1@{ icon: "mdi:robot", form: "rounded", label: "Response Sentiment Analyse1", pos: "b", h: 48 }
n2@{ icon: "mdi:database", form: "rounded", label: "Sheet/Excel updaten", pos: "b", h: 48 }
n3@{ icon: "mdi:swap-vertical", form: "rounded", label: "LLM-Prompts", pos: "b", h: 48 }
n4@{ icon: "mdi:brain", form: "rounded", label: "Chat Model", pos: "b", h: 48 }
n5@{ icon: "mdi:robot", form: "rounded", label: "Output Parser", pos: "b", h: 48 }
n14@{ icon: "mdi:swap-vertical", form: "rounded", label: "final mapping", pos: "b", h: 48 }
n16["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/perplexity.dark.svg' width='40' height='40' /></div><br/>Perplexity Request1"]
n17@{ icon: "mdi:swap-vertical", form: "rounded", label: "Map LLM Output", pos: "b", h: 48 }
n4 -.-> n1
n3 --> n16
n5 -.-> n1
n14 --> n2
n17 --> n1
n16 --> n17
n1 --> n14
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n0 trigger
class n8,n9,n11,n1,n5 ai
class n7,n10,n4 aiModel
class n28 decision
class n6,n18,n2 database
class n13 api
classDef customIcon fill:none,stroke:none
class n13,n15,n16 customIcon
The Problem: AI Visibility Tracking Is Too Manual
If you’ve tried to “audit” your brand in AI tools, you already know the trap. One person runs a few prompts in ChatGPT. Someone else checks Perplexity for citations. Then you paste fragments into a spreadsheet that was never designed for this. Next week you try again, but the prompt wording changed, the model changed, and now your “trend” is basically guesswork. Worse, leadership still wants an update, which means you spend your time assembling screenshots instead of learning anything.
The friction compounds. Here’s where it breaks down in real teams.
- Running the same prompt across tools takes about 10 minutes per prompt, and it’s shockingly easy to forget one source.
- Without a standard output format, you can’t compare answers week to week, so “brand visibility” stays a vibe instead of a metric.
- Sentiment and “who ranks above us” get debated in Slack because nobody logged structured fields.
- Citations vanish into pasted text, which makes it harder to prove where answers came from when you’re reporting.
The Solution: Automated Multi-Model Audit Logging
This workflow turns AI visibility checks into a repeatable process you can run whenever you want. It starts by pulling a list of prompts from a Google Sheet (or using a manual prompt input if you’re testing). For each prompt, it sends the same request to OpenAI for a baseline response and to Perplexity for an answer with sources. If you enable the optional path, it can also call an Apify actor to pull a ChatGPT web UI response, though that route comes with terms-of-service risk, so many teams skip it. Once responses come back, the workflow normalizes the fields so each model’s output is comparable, then runs an LLM-based analysis pass for sentiment and brand hierarchy (who’s mentioned first, second, third). Finally, it appends one clean row per model per prompt into your output Google Sheet, ready for weekly reporting.
The workflow kicks off from a manual trigger (easy to swap for a schedule). Prompts get batched, each prompt gets queried across tools, and responses get mapped into a consistent schema. Then the sentiment review agents classify tone and brand order, and the sheet is updated with structured columns plus sources.
What You Get: Automation vs. Results
| What This Workflow Automates | Results You’ll Get |
|---|---|
| Running the same prompt through OpenAI and Perplexity (plus the optional ChatGPT scrape path) | One comparable answer set per prompt, every run |
| LLM-based sentiment and brand-hierarchy analysis | Structured polarity, emotion, and “who ranks above us” fields instead of Slack debates |
| Appending one row per model per prompt, with sources, to Google Sheets | A clean audit history you can report on week over week |
Example: What This Looks Like
Say you audit 20 prompts every Monday across two AI systems (OpenAI and Perplexity). Manually, plan on about 10 minutes per prompt to run it twice, copy answers, pull citations, and tag sentiment, which is roughly 3 hours total. With this workflow, you drop the 20 prompts into the input tab, hit run, and wait for processing; your hands-on time is closer to 10 minutes. You still review the sheet, but you’re reviewing structured rows, not rebuilding the dataset from scratch.
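The back-of-envelope math behind that estimate, using only the numbers from the example above:

```javascript
// Time arithmetic from the example: 20 prompts, ~10 minutes each done by hand.
const promptCount = 20;
const minutesPerPrompt = 10; // run in two tools, copy answers, pull citations, tag sentiment
const manualMinutes = promptCount * minutesPerPrompt; // 200 minutes, roughly 3 hours
const automatedHandsOnMinutes = 10; // drop prompts in, hit run, review the sheet
const minutesSavedPerAudit = manualMinutes - automatedHandsOnMinutes; // 190 minutes per weekly audit
```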
What You’ll Need
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- Google Sheets for input prompts and audit logging
- OpenAI API to generate baseline model answers
- Perplexity API key (get it from your Perplexity API dashboard)
Skill level: Intermediate. You’ll connect credentials, create two sheet tabs, and validate a few field mappings.
Don’t want to set this up yourself? Talk to an automation expert (free 15-minute consultation).
How It Works
A run starts on command (or on a schedule). The template uses a Manual Trigger so you can test safely, then swap to a webhook, Telegram trigger, or timed schedule once you trust the outputs.
Prompts come from a simple input sheet. n8n pulls your “Prompt” column from Google Sheets, optionally limits items for testing, then loops through prompts in batches so you’re not hammering APIs all at once.
Each prompt is checked across multiple AI systems. The workflow queries OpenAI for the baseline response and Perplexity for an answer with citations. If you enabled the optional Apify path, it attempts a ChatGPT web scrape and uses an If check to route around failures.
Responses get normalized, analyzed, and logged. “Set” mapping nodes assemble consistent fields (prompt, model name, response text, brand mentioned flag, brand hierarchy, polarity, emotion category, and sources). Then the row is appended to your output Google Sheet so your audit history is always up to date.
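Conceptually, the Set-node mapping boils down to a small function like the sketch below. The field names mirror the columns listed above; the `"asics"` string is the template’s example brand, and the function name is ours, not the template’s.

```javascript
// Sketch of the normalization step: one comparable record per model response.
// Field names follow the sheet columns described above; adjust to your template.
function normalizeResult({ prompt, model, responseText, sources = [] }) {
  return {
    Prompt: prompt,
    LLM: model,
    Response: responseText,
    BrandMentioned: responseText.toLowerCase().includes("asics"), // example brand
    // Source1..Source6 columns, padded with empty strings when citations run short
    ...Object.fromEntries(
      Array.from({ length: 6 }, (_, i) => [`Source${i + 1}`, sources[i] ?? ""])
    ),
  };
}
```

Because every branch produces the same keys, one appended row looks identical whether it came from OpenAI, Perplexity, or the scrape path.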
You can easily modify the prompt list format to include categories or funnel stages based on your needs. See the full implementation guide below for customization options.
Step-by-Step Implementation Guide
Step 1: Configure the Manual Trigger
This workflow starts manually so you can test prompts and output mapping before activating it.
- Add and open Manual Launch Trigger as the workflow entry point.
- Connect Manual Launch Trigger to Retrieve Prompt List.
Step 2: Connect Google Sheets
These nodes read prompt inputs and write analysis results back to your spreadsheets.
- Open Retrieve Prompt List and set Range to `A1:A100` and sheetId to `[YOUR_ID]`.
- Credential Required: Connect your `googleSheetsOAuth2Api` credentials in Retrieve Prompt List.
- Open Update Spreadsheet Output and confirm Operation is `append` with the desired sheetName and documentId values.
- Credential Required: Connect your `googleSheetsOAuth2Api` credentials in Update Spreadsheet Output.
- Open Append Row to Sheet and confirm Operation is `append` with the correct sheetName and documentId.
- Credential Required: Connect your `googleSheetsOAuth2Api` credentials in Append Row to Sheet.
Step 3: Prepare Prompt Inputs and Looping
This segment builds prompts, limits test items, and loops through the prompt list in batches.
- In Prompt Seed Builder, set Prompt to `Are Asics running shoes any good` for the Perplexity pre-pass.
- In Manual Prompt Entry, set Prompt to `Was sind die besten Laufschuhe?` (“What are the best running shoes?”) for manual testing.
- Open Test Item Limit and set Max Items to `2` to keep test runs short.
- Verify the loop path is Retrieve Prompt List → Test Item Limit → Pre-Loop Pass → Iterate Prompt Batches → Loop Input Marker.
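The batching pattern that n8n’s Split in Batches node applies can be sketched in a few lines of plain JavaScript. The batch size and the 2-item test limit are template settings, not requirements.

```javascript
// Minimal sketch of the Split in Batches pattern: process prompts in small
// chunks instead of firing every API call at once.
function* batches(items, batchSize) {
  for (let i = 0; i < items.length; i += batchSize) {
    yield items.slice(i, i + batchSize);
  }
}

// "Limit for testing" = 2 keeps a dry run cheap; remove the slice in production.
const prompts = ["p1", "p2", "p3", "p4", "p5"].slice(0, 2);
const chunks = [...batches(prompts, 1)]; // one prompt per loop iteration
```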
Step 4: Configure Parallel LLM/Tool Calls
Once each prompt hits the loop, three tools run in parallel to gather responses from different sources.
- Confirm that Loop Input Marker outputs to all three of OpenAI Query Chain, Perplexity Query Call, and Apify ChatGPT Scrape Call in parallel.
- In OpenAI Query Chain, set Text to `={{ $json.Prompt }}` and keep promptType as `define`.
- Open OpenAI Chat Model Prime and select model `gpt-5`.
- Credential Required: Connect your `openAiApi` credentials in OpenAI Chat Model Prime.
- Open Perplexity Query Call and set model to `sonar` with message content `={{ $json.Prompt }}`.
- Credential Required: Connect your `perplexityApi` credentials in Perplexity Query Call.
- Open Apify ChatGPT Scrape Call and set URL to `https://api.apify.com/v2/acts/automation_nerd~chatgpt-prompt-actor/run-sync-get-dataset-items` with JSON Body set to `={ "prompts": [{{ JSON.stringify($json["Prompt"]) }}], "proxyCountry": "DE" }`.
- Credential Required: Connect your `httpQueryAuth` credentials in Apify ChatGPT Scrape Call.
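The `JSON.stringify` wrapper in that JSON body is doing real work: without it, a prompt containing quotes or line breaks would break the hand-built JSON. A minimal sketch of the same escaping pattern in plain JavaScript (the function name is ours, not the template’s):

```javascript
// Why the template wraps the prompt in JSON.stringify: it escapes quotes and
// newlines so the hand-assembled request body stays valid JSON.
function buildApifyBody(prompt) {
  return `{ "prompts": [${JSON.stringify(prompt)}], "proxyCountry": "DE" }`;
}

const body = buildApifyBody('Are "Asics" shoes any good?');
// Despite the embedded quotes, the result parses back cleanly.
const parsed = JSON.parse(body);
```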
Step 5: Map and Normalize Model Responses
These nodes standardize output fields across the different AI tools.
- In OpenAI Response Mapper, set Response to `={{ $json.text }}` and LLM to `OpenAI`.
- In Perplexity Response Map, map Response to `={{ $json.choices[0].message.content }}` and citations to `Source1` through `Source6`.
- In ChatGPT Response Map, map Response to `={{ $json.response }}` and citations to `Source1` through `Source6`.
- Ensure all three branches output to Normalize Tool Result before sentiment processing.
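In plain JavaScript, the Perplexity mapping amounts to something like the sketch below. The `choices`/`citations` shape follows the expressions above plus Perplexity’s chat-completions-style payload; verify the exact field names against a real execution log before relying on them.

```javascript
// Sketch of the Perplexity Response Map step: pull the answer text and spread
// up to six citation URLs into the Source1..Source6 columns.
function mapPerplexity(resp) {
  const out = { Response: resp.choices[0].message.content, LLM: "Perplexity" };
  (resp.citations ?? []).slice(0, 6).forEach((url, i) => {
    out[`Source${i + 1}`] = url;
  });
  return out;
}
```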
Step 6: Set Up Sentiment Review Agents and Output Parsers
Two sentiment agents score the responses and apply structured output parsing.
- In Sentiment Review Agent A, keep Text set to `=You task is to analyse the sentiment of a text message... "{{ $json.Message }}"` and ensure hasOutputParser is enabled.
- Connect Structured Output Reader to Sentiment Review Agent A and keep the JSON schema example for output.
- In Sentiment Review Agent B, keep Text set to `=Take this message and evaluate its content: "{{ $json.Response }}"` with hasOutputParser enabled.
- Connect Structured Output Reader B to Sentiment Review Agent B and keep the JSON schema example for output.
- Open Chat Model Core and set the model to `gpt-4.1-mini`.
- Credential Required: Connect your `openAiApi` credentials in Chat Model Core.
- Open OpenAI Chat Model B and set the model to `gpt-4.1-mini`.
- Credential Required: Connect your `openAiApi` credentials in OpenAI Chat Model B.
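The exact schema lives in the Structured Output Reader nodes, but a hypothetical example of the parsed result looks like this. The field names (`polarity`, `emotion`, `hierarchy`) follow the sheet columns described earlier; the values shown are illustrative only.

```javascript
// Hypothetical parsed output from a sentiment agent; the real schema is the
// JSON example configured in the Structured Output Reader node.
const exampleSentiment = {
  polarity: "positive",                   // positive | neutral | negative
  emotion: "trust",                       // coarse emotion category
  brandMentioned: true,
  hierarchy: ["Asics", "Brooks", "Nike"], // brands in order of mention
};

// A simple sanity check you could apply downstream before writing the row.
function validateSentiment(s) {
  return ["positive", "neutral", "negative"].includes(s.polarity)
    && Array.isArray(s.hierarchy);
}
```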
Step 7: Map Final Fields and Write Results
This step prepares output fields for your sheets and stores the results.
- In Final Field Mapping, set fields like Message to `={{ $('Map LLM Response').item.json.Message }}` and Prompt to `={{ $('Prompt Seed Builder').item.json.Prompt }}`.
- In Assemble Sheet Fields, map Prompt to `={{ $('Loop Input Marker').item.json.Prompt }}` and Response to `={{ $('Normalize Tool Result').item.json.Response }}`.
- Keep the brand check expression in Assemble Sheet Fields set to `={{ $('Normalize Tool Result').item.json.Response.toLowerCase().includes("asics") }}`.
- Ensure Final Field Mapping outputs to Update Spreadsheet Output and Assemble Sheet Fields outputs to Append Row to Sheet.
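The template leaves brand ranking to the LLM analysis pass, but if you want a deterministic cross-check, one possible approach is to sort tracked brands by where they first appear in the response text. The brand list here is illustrative; this helper is our sketch, not part of the template.

```javascript
// Possible deterministic complement to the LLM hierarchy: order brands by
// first position of mention in the response text (unmentioned brands dropped).
function brandOrder(responseText, brands) {
  const text = responseText.toLowerCase();
  return brands
    .map((b) => [b, text.indexOf(b.toLowerCase())])
    .filter(([, i]) => i !== -1)
    .sort((a, b) => a[1] - b[1])
    .map(([b]) => b);
}
```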
Step 8: Test and Activate Your Workflow
Run a test to validate prompt flow, parallel tools, and sheet output before enabling production.
- Click Execute Workflow and confirm Retrieve Prompt List loads values from your sheet.
- Watch the parallel run after Loop Input Marker and verify OpenAI Query Chain, Perplexity Query Call, and Apify ChatGPT Scrape Call all return data.
- Confirm successful output mapping in Normalize Tool Result, then check sentiment results in Sentiment Review Agent B.
- Verify new rows appear in the sheets connected to Update Spreadsheet Output and Append Row to Sheet.
- When testing is complete, toggle the workflow to Active for production use.
Common Gotchas
- Google Sheets credentials can expire or need specific permissions. If things break, check the n8n Credentials screen and confirm the spreadsheet is shared with the connected Google account first.
- If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
- Default prompts in AI nodes are generic. Add your brand voice early or you’ll be editing outputs forever.
Frequently Asked Questions
How long does setup take?
About an hour if your APIs and sheet are ready.
Do I need coding skills?
No. You’ll mostly connect accounts and paste API keys. The only “technical” part is matching your sheet columns to the workflow’s fields.
Is n8n free?
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in OpenAI and Perplexity API costs, which depend on how many prompts you run.
Where should I host n8n?
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
Can I run this on a schedule or customize it?
Yes, and it’s one of the best tweaks. Replace the Manual Launch Trigger with a Schedule Trigger, then keep the “Retrieve Prompt List → Split in Batches → append row” structure the same. Common customizations include adding a “Prompt category” column, tracking extra models by duplicating the request-and-map path, and adding a simple visibility score field for reporting.
Why is the Google Sheets step failing?
It’s usually expired OAuth, the wrong Google account, or a spreadsheet permission issue. Reconnect Google Sheets in n8n, then confirm the exact file and tab names match what the nodes expect. Also check if your sheet has headers like “Prompt” and the output columns; missing headers can make rows append in the wrong place. If it fails only sometimes, you may be hitting Google’s rate limits during large batches.
How much time will this save?
A lot. In the example above, a 20-prompt weekly audit drops from roughly 3 hours of manual work to about 10 minutes of hands-on time.
Is n8n better than Zapier or Make for this?
Often, yes, because this workflow isn’t just “send prompt, store text.” You’re looping through many prompts, branching on success paths, normalizing outputs, and running structured LLM analysis, which gets clunky fast in simpler builders. n8n also lets you self-host, so you’re not paying per tiny step when you scale audits. Zapier or Make can still be fine for a lightweight two-model log with no sentiment, no hierarchy, and no batching. If you want help choosing, Talk to an automation expert.
Once you have an audit sheet that fills itself, “How are we doing in AI?” stops being a scramble. The workflow handles the repetitive logging so you can focus on what the answers are telling you.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.