Google Sheets + Google Drive, ad images delivered fast
Launching new ad creatives shouldn’t feel like a scavenger hunt through spreadsheets, folders, and half-finished prompts. But when every product needs “just a few more variations,” the process turns into constant copy-paste, broken links, and review chaos.
This ad image automation hits performance marketers first. But agency owners and in-house creative ops feel the same drag when they’re trying to ship 20 to 50 variations fast without losing track of what’s approved.
This n8n workflow turns Google Sheets rows into UGC-style ad image prompts, generates the images, stores them in Google Drive, and writes clean links back to your Sheet. You’ll see how it works, what you need, and what to watch out for.
How This Automation Works
The full n8n workflow, from trigger to final output:
n8n Workflow Template: Google Sheets + Google Drive, ad images delivered fast
flowchart LR
subgraph sg0["When clicking ‘Execute workflow’ Flow"]
direction LR
n0@{ icon: "mdi:play-circle", form: "rounded", label: "When clicking ‘Execute workf..", pos: "b", h: 48 }
n1@{ icon: "mdi:database", form: "rounded", label: "Get row(s) in sheet", pos: "b", h: 48 }
n2@{ icon: "mdi:robot", form: "rounded", label: "Analyze image", pos: "b", h: 48 }
n3@{ icon: "mdi:cog", form: "rounded", label: "Download file", pos: "b", h: 48 }
n4@{ icon: "mdi:robot", form: "rounded", label: "AI Agent", pos: "b", h: 48 }
n5@{ icon: "mdi:brain", form: "rounded", label: "OpenAI Chat Model", pos: "b", h: 48 }
n6@{ icon: "mdi:swap-vertical", form: "rounded", label: "Split Out", pos: "b", h: 48 }
n7@{ icon: "mdi:database", form: "rounded", label: "Append row in sheet", pos: "b", h: 48 }
n8@{ icon: "mdi:cog", form: "rounded", label: "Wait", pos: "b", h: 48 }
n9@{ icon: "mdi:swap-horizontal", form: "rounded", label: "If", pos: "b", h: 48 }
n10["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Get image status"]
n11["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Call Fal.ai API (nano-banana)"]
n12@{ icon: "mdi:database", form: "rounded", label: "Update row in sheet", pos: "b", h: 48 }
n13@{ icon: "mdi:cog", form: "rounded", label: "Upload file", pos: "b", h: 48 }
n14@{ icon: "mdi:swap-vertical", form: "rounded", label: "Edit Fields", pos: "b", h: 48 }
n15@{ icon: "mdi:robot", form: "rounded", label: "Structured Output Parser", pos: "b", h: 48 }
n16["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>HTTP Request"]
n17["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Get the image1"]
n9 --> n17
n9 --> n8
n8 --> n10
n4 --> n6
n6 --> n7
n14 --> n11
n13 --> n12
n16 --> n13
n2 --> n4
n3 --> n2
n17 --> n16
n10 --> n9
n5 -.-> n4
n7 --> n14
n1 --> n3
n15 -.-> n4
n11 --> n10
n0 --> n1
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n0 trigger
class n2,n4,n15 ai
class n5 aiModel
class n9 decision
class n1,n7,n12 database
class n10,n11,n16,n17 api
classDef customIcon fill:none,stroke:none
class n10,n11,n16,n17 customIcon
The Problem: Ad image variations are a tracking nightmare
If you’ve ever tried to scale UGC-style ad variations from a product catalog, you know the ugly part isn’t the “idea.” It’s the coordination. Product info lives in a sheet, the hero image lives in Drive, prompts live in someone’s doc, and outputs end up scattered across chats. Then somebody asks, “Which version did we ship?” and you’re spending an hour reconstructing the trail from memory. Multiply that by a handful of SKUs and a couple campaigns, and you’re suddenly managing a creative factory with duct tape.
The friction compounds. Here’s where things usually snap.
- You generate variations, but the prompts and outputs aren’t stored together, so review turns into guesswork.
- Drive links break or require access, which means your image tools can’t fetch the input files reliably.
- Manual status tracking (“queued,” “rendering,” “done”) gets stale fast, so teams duplicate work.
- One missing field in the spreadsheet forces you to pause, fix, rerun, and remember where you left off.
The Solution: Turn each sheet row into ready-to-review ad images
This workflow uses Google Sheets as your control panel and Google Drive as your creative library. It starts by pulling product rows (name, description, constraints, number of variations, aspect ratio, and more). Next, it fetches the product image from Drive, runs a quick image inspection with OpenAI (Vision/Chat), and generates structured prompts for UGC-style scenes using an “orchestrator” step that keeps the output consistent. Those prompts get appended into a separate ad_image sheet so you can review or iterate without touching the generator logic.
When you’re ready to render, the workflow sends each prompt to Fal.ai’s nano-banana model, polls the status until the job completes, downloads the finished image, uploads it to Google Drive, then updates the sheet with a clean output URL. Prompts, statuses, and links stay tied to the same record, so you can actually scale this without losing your mind.
What You Get: Automation vs. Results
| What This Workflow Automates | Results You’ll Get |
|---|---|
| Pulling product rows from Google Sheets and the hero image from Google Drive | One spreadsheet acts as the control panel for every SKU |
| Analyzing the product image and generating structured UGC-style prompts | Consistent, reviewable prompts logged in the ad_image sheet |
| Rendering each prompt via Fal.ai and polling until the jobs complete | Dozens of variations per run without babysitting renders |
| Uploading outputs to Drive and writing links and statuses back to the sheet | Prompts, statuses, and final URLs stay tied to one record |
Example: What This Looks Like
Say you’re launching 10 SKUs and you want 5 UGC-style image variations per SKU. Manually, you’re typically doing about 10 minutes per variation between downloading the product image, rewriting prompts, running generation, uploading to Drive, and pasting links into a sheet, which is roughly 8 hours of tedious work. With this workflow, you fill in the row once, trigger the run, and then you’re mostly waiting on rendering. Realistically, you spend about 30 minutes checking prompts and approving outputs instead of babysitting every file.
What You’ll Need
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- Google Sheets to store inputs, prompts, and status.
- Google Drive to fetch inputs and store outputs.
- Fal.ai API key (set env var FAL_KEY in n8n).
- OpenAI API access (use it for image analysis and prompt generation).
Skill level: Intermediate. You’ll connect OAuth credentials, set one environment variable, and confirm your Sheet columns match the workflow.
Don’t want to set this up yourself? Talk to an automation expert (free 15-minute consultation).
How It Works
A manual run (or scheduled run) kicks things off. n8n pulls rows from your product sheet so you can control what gets processed by changing a status field or filtering rows.
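Conceptually, that row selection is just a status filter. Here's a minimal sketch; the column names and status value mirror the workflow's conventions but the sample data is illustrative:

```javascript
// Sketch of the row-selection logic the Sheets step performs, assuming a
// `status` column marks which rows are ready for processing.
const rows = [
  { product_name: 'Mug', status: 'Create' },
  { product_name: 'Lamp', status: 'Complete' },
  { product_name: 'Desk', status: 'Create' },
];

// Only rows whose status matches enter the pipeline.
function selectRows(allRows, status = 'Create') {
  return allRows.filter((row) => row.status === status);
}
```

Flipping a row's status back to `Create` is all it takes to requeue it on the next run.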
The product image is fetched from Google Drive and “understood.” The workflow normalizes your Drive URL into a direct link, retrieves the file, then uses OpenAI Vision/Chat to extract useful details (what the product is, what stands out, what should not be changed).
Prompts get generated and organized before any rendering happens. An agent step assembles UGC-style scene prompts using your brand notes, constraints, target model style, aspect ratio, and number of variations. Those prompts are appended into the ad_image sheet, which keeps the review loop clean.
Fal.ai generates the images, then Drive becomes the source of truth. For each prompt, the workflow sends a render request, polls the status until the job completes, downloads the output, uploads it to Google Drive, and updates the corresponding sheet row with the final URL and status.
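The render-then-poll pattern above (HTTP Request, If, and Wait nodes) can be sketched in plain JavaScript. `fetchStatus` is injected so the loop can run without a live Fal.ai job; the `COMPLETED` status comes from the workflow, while `FAILED` and the retry cap are assumptions added for safety:

```javascript
// Poll a render job until it completes, mirroring the workflow's
// If -> Wait -> "Get image status" loop (Wait = 10 seconds by default).
async function pollUntilComplete(fetchStatus, { intervalMs = 10000, maxTries = 60 } = {}) {
  for (let attempt = 0; attempt < maxTries; attempt++) {
    const { status } = await fetchStatus();
    if (status === 'COMPLETED') return true;          // proceed to download
    if (status === 'FAILED') throw new Error('render failed'); // assumed failure state
    await new Promise((resolve) => setTimeout(resolve, intervalMs)); // Wait-node equivalent
  }
  throw new Error('timed out waiting for render');
}
```

The cap on attempts is the one thing the n8n loop doesn't have natively, which is why the gotchas below warn about infinite cycling.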
You can easily modify aspect ratio and prompt style based on your needs. See the full implementation guide below for customization options.
Step-by-Step Implementation Guide
Step 1: Configure the Manual Trigger
Start the workflow with a manual trigger so you can test prompt generation end-to-end before automating it.
- Add and keep Manual Start Trigger as the first node.
- Leave all fields in Manual Start Trigger at their defaults (no parameters required).
- (Optional) Keep Flowpast Branding as a reference sticky note for documentation.
Step 2: Connect Google Sheets Inputs
Pull product rows that need image prompt creation from Google Sheets.
- Open Retrieve Sheet Rows and set Document to `[YOUR_ID]` and Sheet to `[YOUR_ID]`.
- Set the filter in Retrieve Sheet Rows to match status equals `Create`.
- Credential Required: Connect your googleSheetsOAuth2Api credentials in Retrieve Sheet Rows.
- Note: Your sheet must include the columns `product_image_url`, `product_name`, `product_description`, `num_variations`, and `model_target`.
Step 3: Set Up Image Retrieval and AI Analysis
Download the product image and analyze it to generate structured ad insights and prompt plans.
- In Fetch Drive File, set Operation to `download` and File ID to `={{ $json.product_image_url }}`.
- Credential Required: Connect your googleDriveOAuth2Api credentials in Fetch Drive File.
- In Inspect Image Content, keep Resource as `image`, Input Type as `base64`, and Operation as `analyze`.
- Set the Text prompt in Inspect Image Content to the provided JSON-guided analysis prompt (as-is).
- Credential Required: Connect your openAiApi credentials in Inspect Image Content.
- Open Ad Prompt Orchestrator and keep the Text field set to the long JSON instruction that references `{{ $('Retrieve Sheet Rows').item.json.* }}` values.
- Ensure Ad Prompt Orchestrator has Prompt Type set to `define` and Has Output Parser enabled.
- Verify OpenAI Chat Engine is connected as the language model for Ad Prompt Orchestrator. Credential Required: Connect your openAiApi credentials in OpenAI Chat Engine.
- Confirm Structured JSON Parser is connected as the output parser for Ad Prompt Orchestrator; add credentials on the parent node (OpenAI Chat Engine), not the parser.
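The shape the parser enforces, and the split that happens in the next step, can be sketched like this. The field names (`output.scenes`, `scene_id`, `prompt`) mirror the workflow's expressions; the sample content is invented:

```javascript
// Illustrative structured output from Ad Prompt Orchestrator.
const agentResult = {
  output: {
    scenes: [
      { scene_id: 1, prompt: 'UGC-style photo: hand holding the mug in a sunlit kitchen' },
      { scene_id: 2, prompt: 'UGC-style photo: the mug on a cluttered desk, morning light' },
    ],
  },
};

// Equivalent of the Split Out node: one downstream item per scene.
function splitScenes(result) {
  return result.output.scenes.map((scene) => ({ json: scene }));
}
```

If the agent's JSON ever deviates from this shape, the Split Out node downstream will emit nothing, which is the first place to look when no rows get appended.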
Step 4: Create Scene Records and Map Product Fields
Split the AI JSON into per-scene items, log them in Sheets, and map image URLs for rendering.
- In Explode Scene List, set Field to Split Out to `output.scenes`.
- In Append Sheet Record, set Operation to `append` and map columns: prompt to `={{ $json.prompt }}`, status to `Ready`, scene_ref to `={{ $('Retrieve Sheet Rows').item.json.product_name }}_{{ $json.scene_id }}`, and product_name to `={{ $('Retrieve Sheet Rows').item.json.product_name }}`.
- Credential Required: Connect your googleSheetsOAuth2Api credentials in Append Sheet Record.
- In Map Product Fields, keep Include Other Fields set to `true` and add the product field expression exactly as provided to convert Drive links into a direct image URL.
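If you want to verify that conversion outside n8n, here's a minimal sketch. It assumes the common `/file/d/<id>/` share-link format and targets the `uc?export=view` direct-link style the workflow relies on; the function name is illustrative:

```javascript
// Convert a Google Drive share link into a direct image URL that
// external services (like Fal.ai) can fetch without a redirect page.
function toDirectDriveUrl(shareUrl) {
  const match =
    shareUrl.match(/\/d\/([a-zA-Z0-9_-]+)/) ||   // .../file/d/<FILE_ID>/view
    shareUrl.match(/[?&]id=([a-zA-Z0-9_-]+)/);   // ...open?id=<FILE_ID>
  if (!match) throw new Error(`Unrecognized Drive URL: ${shareUrl}`);
  return `https://drive.google.com/uc?export=view&id=${match[1]}`;
}
```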
- Note: If your Drive links arrive in a different format, adjust the product expression in Map Product Fields to extract the correct file ID.
Step 5: Render Images via Fal and Poll for Completion
Send prompts to the rendering API, poll for completion, and fetch the final image URL.
- In Invoke Fal Image Edit, set URL to `=https://queue.fal.run/fal-ai/{{ $('Retrieve Sheet Rows').item.json.model_target }}/edit`, Method to `=POST`, and JSON Body to the provided payload with `{{ $json.prompt }}` and `{{ $json.product }}`.
- Credential Required: Connect your httpHeaderAuth credentials in Invoke Fal Image Edit.
- In Request Image Status, set URL to `={{ $json.status_url }}` and keep Authentication as `genericCredentialType`.
- Credential Required: Connect your httpHeaderAuth credentials in Request Image Status.
- In Conditional Status Check, set the condition to Left Value `={{ $json.status }}` equals Right Value `=COMPLETED`.
- In Delay Cycle, set Amount to `10` to pause before re-checking status.
- In Retrieve Render Result, set URL to `=https://queue.fal.run/fal-ai/nano-banana/requests/{{ $json.request_id }}` and ensure retry is enabled.
- Credential Required: Connect your httpHeaderAuth credentials in Retrieve Render Result.
- In External Image Fetch, set URL to `={{ $json.images[0].url }}` and keep Authentication as `genericCredentialType`.
- Credential Required: Connect your httpHeaderAuth credentials in External Image Fetch.
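If you need to adapt the JSON Body, the payload can be sketched roughly like this. The field names (`prompt`, `image_urls`) are assumptions based on common Fal.ai image-edit payloads; check the payload provided in the workflow against Fal.ai's docs for your chosen model:

```javascript
// Hypothetical sketch of the body sent to the Fal queue endpoint.
// `productImageUrl` is the direct Drive link produced by Map Product Fields.
function buildRenderPayload(prompt, productImageUrl) {
  return {
    prompt,
    image_urls: [productImageUrl], // field name assumed; verify per model
  };
}
```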
- Note: If the status never reaches `COMPLETED`, the loop will keep cycling through Delay Cycle. Verify the Fal API response structure and status values.
Step 6: Save Rendered Images and Update the Sheet
Upload the rendered file to Google Drive and write the output URL back to the tracking sheet.
- In Upload to Drive, set Name to `={{ $('Retrieve Sheet Rows').item.json.product_name }}_{{ $('Append Sheet Record').item.json.scene_ref }}.jpeg`, Folder to `[YOUR_ID]`, and Input Data Field Name to `=data`.
- Credential Required: Connect your googleDriveOAuth2Api credentials in Upload to Drive.
- In Update Sheet Record, set Operation to `update` and map scene_ref to `={{ $('Append Sheet Record').item.json.scene_ref }}`, status to `Complete`, and output_url to `={{ $json.webViewLink }}`.
- Credential Required: Connect your googleSheetsOAuth2Api credentials in Update Sheet Record.
Step 7: Test and Activate Your Workflow
Run a manual test, confirm output integrity, then activate for production runs.
- Click Execute Workflow on Manual Start Trigger to run the full flow.
- Confirm that Append Sheet Record creates rows with `Ready` status and prompt text.
- Verify Upload to Drive creates image files and Update Sheet Record updates `output_url` and `Complete` status.
- Once verified, toggle the workflow to Active so it can be triggered on demand or swapped to a scheduled trigger later.
Common Gotchas
- Google Drive permissions matter more than people expect. If Fal.ai can’t fetch your input image, make sure the Drive file is set to “Anyone with link → Viewer” and confirm the workflow is using the converted direct link.
- If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
- Fal.ai credentials can fail quietly if the header auth is wrong. Check that your Authorization header uses your FAL_KEY environment variable and confirm the status_url is returning a valid job state.
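A quick way to sanity-check that last gotcha before blaming the workflow: confirm the key is present and the header is shaped correctly. This sketch assumes Fal's `Key <token>` authorization scheme; verify against your account if requests still return 401:

```javascript
// Build the Authorization header the Fal HTTP nodes should send,
// failing loudly if FAL_KEY is missing from the environment.
function falAuthHeader(env = process.env) {
  if (!env.FAL_KEY) throw new Error('FAL_KEY environment variable is not set');
  return { Authorization: `Key ${env.FAL_KEY}` };
}
```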
Frequently Asked Questions
How long does setup take?
About an hour if your Google credentials are ready.
Do I need to know how to code?
No. You’ll mostly connect accounts and match your Sheet columns to the workflow.
Is n8n free to use?
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in OpenAI and Fal.ai API usage costs, which depend on how many images you generate.
Where should I host n8n?
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
Can I customize the style of the generated images?
Yes, and it’s one of the best reasons to use this workflow. Update the product row fields like aspect_ratio, constraints, and brand_notes to steer outputs without rewriting prompts by hand. If you want deeper control, adjust the instructions in the Ad Prompt Orchestrator so the generated scene list matches your creative strategy (luxury, cozy, techy, or whatever you’re testing). You can also swap the Fal.ai model in the image generation request if you want faster jobs or a different look.
Why can’t the workflow access my Google Drive images?
Usually it’s OAuth permissions or the file isn’t actually accessible to the workflow. Reconnect Google Drive in n8n, confirm the account has access to the folder, then verify your input images are shared as “Anyone with link → Viewer” so external services can fetch them. Also double-check that the link is being converted into a direct uc?export=view URL before the image is sent for analysis or generation.
How many images can I generate in one run?
A lot, as long as you pace it.
Should I use n8n, Zapier, or Make for this?
For this workflow, n8n is usually the better fit because you need polling loops (check job status, wait, retry), structured AI output parsing, and more control over how records are written back to Sheets. Those patterns are possible in Zapier or Make, but they get fiddly and can get expensive once you’re looping over lots of variations. n8n also gives you the self-hosting option if you want to run high volume without counting every task. If you only need a simple “new row → create one image → upload” flow, Zapier or Make can be fine. Talk to an automation expert if you want help picking the right approach for your creative volume.
Once this is running, your sheet becomes the command center and Drive becomes the library. The workflow handles the repetitive parts, and you get to focus on what actually moves performance.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.