Google Sheets + Drive: Meta Ad Library insights
Competitor ad research sounds simple until you’re juggling tabs, screenshotting creatives, and trying to remember why an ad “felt strong” three days later.
This hits performance marketers first, but agency strategists and founders running their own ads feel it too. With Meta ad insights automation, you stop collecting random examples and start building a clean, searchable library you can actually use.
This workflow pulls image ads from the Meta Ad Library, has AI analyze what’s happening inside each creative, saves the image to Google Drive, then logs standardized insights into Google Sheets. You’ll see exactly how it works, what you need, and what results to expect.
How This Automation Works
The full n8n workflow, from trigger to final output:
n8n Workflow Template: Google Sheets + Drive: Meta Ad Library insights
flowchart LR
subgraph sg0["When clicking ‘Test workflow’ Flow"]
direction LR
n0@{ icon: "mdi:play-circle", form: "rounded", label: "When clicking ‘Test workflow’", pos: "b", h: 48 }
n1@{ icon: "mdi:swap-vertical", form: "rounded", label: "Settings", pos: "b", h: 48 }
n2["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Clean Prompt"]
n3["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Scrape Meta Ad Library with .."]
n4@{ icon: "mdi:swap-vertical", form: "rounded", label: "Pass relevant Fields", pos: "b", h: 48 }
n5@{ icon: "mdi:swap-vertical", form: "rounded", label: "Calculate Runtime in Days", pos: "b", h: 48 }
n6@{ icon: "mdi:cog", form: "rounded", label: "Sort by Reach or Days Running", pos: "b", h: 48 }
n7@{ icon: "mdi:swap-horizontal", form: "rounded", label: "Filter only Image Ads", pos: "b", h: 48 }
n8@{ icon: "mdi:cog", form: "rounded", label: "Limit Images to Analyze", pos: "b", h: 48 }
n9@{ icon: "mdi:brain", form: "rounded", label: "OpenAI Chat Model", pos: "b", h: 48 }
n10@{ icon: "mdi:robot", form: "rounded", label: "Structured Output Parser", pos: "b", h: 48 }
n11["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Download Image"]
n12@{ icon: "mdi:robot", form: "rounded", label: "Analyze Image Contents", pos: "b", h: 48 }
n13@{ icon: "mdi:cog", form: "rounded", label: "Save Image to Google Drive", pos: "b", h: 48 }
n14["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/merge.svg' width='40' height='40' /></div><br/>Merge Data"]
n15@{ icon: "mdi:database", form: "rounded", label: "Store Data in Google Sheets", pos: "b", h: 48 }
n1 --> n2
n14 --> n15
n2 --> n3
n11 --> n13
n11 --> n12
n9 -.-> n12
n4 --> n11
n7 --> n8
n12 --> n14
n8 --> n4
n10 -.-> n12
n5 --> n6
n13 --> n14
n6 --> n7
n3 --> n5
n0 --> n1
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n0 trigger
class n10,n12 ai
class n9 aiModel
class n7 decision
class n15 database
class n3,n11 api
class n2 code
The Problem: Competitor Ad Research Is Messy and Hard to Reuse
If you’ve ever built a competitor report from scratch, you know the pain isn’t “finding ads.” It’s everything after that. You pull up the Ad Library, open 20 promising creatives, then start the manual grind: screenshot, download, rename, paste into a doc, jot notes, and hope you can find it again next month. Meanwhile the team asks for patterns (offers, hooks, CTAs, angles), but your notes are inconsistent because you were rushing. The real cost is not just time. It’s lost learning, because good insights vanish into messy folders.
The friction compounds fast. Here’s where it usually breaks down.
- You collect lots of creatives, but your “notes” aren’t standardized, so comparing ads turns into guesswork.
- Images get saved with random filenames, which makes them painful to search or reference in future audits.
- Filtering by what matters (reach, how long it’s been running, image-only) becomes a manual scavenger hunt.
- When you scale past a handful of ads, errors creep in and the whole report starts to feel untrustworthy.
The Solution: Automatically Analyze Meta Ad Library Creatives and Log Them
This n8n workflow turns the Meta Ad Library into a structured research pipeline. You start by entering the Meta Ad Library URL (or page ID details) you want to analyze, then the workflow fetches ad data through an HTTP request. Next, it calculates how long ads have been running and can rank them by reach or duration so you’re not wasting time on low-signal examples. It filters to image ads, caps how many you want to review, then pulls each image for analysis. OpenAI (GPT-4o) reviews the creative and produces consistent fields like visual description, hook elements, main offer, call-to-action, and psychological triggers. Finally, the original image is archived to Google Drive and the full record is written to Google Sheets so you have a reusable competitor library.
The workflow starts with your parameters, then pulls ads and sorts them by the signals you care about. After that, it retrieves the image, runs AI analysis, and merges the analysis with the metadata. The last step writes one clean row per ad into Google Sheets, with the image stored in Drive for quick reference.
What You Get: Automation vs. Results
| What This Workflow Automates | Results You’ll Get |
|---|---|
| Pulling ads from the Meta Ad Library and filtering down to image creatives | A single, searchable competitor library instead of scattered screenshots |
| Ranking ads by reach or days running and capping how many get analyzed | Review time spent only on the highest-signal creatives |
| AI analysis of each image: visual description, hook elements, offer, CTA, psychological triggers | Consistent, comparable fields for every ad you log |
| Archiving originals to Google Drive and writing one structured row per ad to Google Sheets | Images you can reference later, plus a spreadsheet you can filter and sort |
Example: What This Looks Like
Say you want a weekly competitor check with 10 image ads worth studying (the workflow defaults to scraping up to 300, then analyzing 10). Manually, saving each image, naming it, and writing usable notes can easily take about 10 minutes per ad, so you’re at roughly 100 minutes before you even summarize anything. With this workflow, you spend about 10 minutes setting the URL and limits, then let it run while it pulls images, analyzes them, saves them to Drive, and writes the Sheet. You come back to a ready-to-filter table instead of a pile of half-finished notes.
What You’ll Need
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- Google Sheets to store the structured ad insights.
- Google Drive for archiving original ad images.
- Apify account to scrape Meta Ad Library data reliably.
- OpenAI API key (get it from your OpenAI API dashboard)
Skill level: Intermediate. You’ll connect accounts, paste API keys, and adjust a few settings like folder IDs and sheet columns.
Don’t want to set this up yourself? Talk to an automation expert (free 15-minute consultation).
How It Works
You start the run with a URL and a few limits. The workflow begins with a manual trigger, then “Configure Parameters” sets things like the Meta Ad Library URL, how many ads to scrape (default 300), and how many to analyze deeply (default 10).
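To make that concrete, here is a minimal sketch of the parameter object as an n8n Code node might return it. The field names (adLibraryUrl, maxAdsToScrape, maxAdsToAnalyze) are illustrative assumptions rather than the template's exact names; the default values match the ones described above.

```javascript
// Illustrative defaults for the "Configure Parameters" step.
// Field names are assumptions; align them with the Set/Code node in your copy of the workflow.
const params = {
  adLibraryUrl: "https://www.facebook.com/ads/library/?q=your-competitor", // the Ad Library search to analyze
  maxAdsToScrape: 300, // how many ads the scraper may return at most
  maxAdsToAnalyze: 10, // how many of those get the full AI analysis
};

return [{ json: params }];
```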
Ad data gets cleaned, sorted, and narrowed down. A sanitize step keeps prompts consistent, then an HTTP request pulls the ad library results. Next it computes run duration, ranks by reach or by days running, and filters down to image ads only so the AI isn’t analyzing formats you don’t care about.
Each creative is downloaded and analyzed with AI. The workflow maps essential fields, retrieves the actual ad image, then sends that image into the OpenAI chat model. An AI agent and structured output parser turn a “creative impression” into consistent fields like hook, offer, CTA, and psychological triggers.
Everything is archived and logged for reuse. The original image gets saved to a Google Drive folder, the analysis is merged with the ad metadata, and the final dataset is written into Google Sheets as a structured row per ad.
You can easily modify the ranking and filters to focus on different competitor signals based on your needs. See the full implementation guide below for customization options.
Step-by-Step Implementation Guide
Step 1: Configure the Manual Trigger
This workflow starts manually, which is ideal for testing and ad-hoc runs.
- Add or open the Manual Start Trigger node.
- Keep default settings, then connect Manual Start Trigger to Configure Parameters.
Step 2: Connect the Data Sources and Storage Services
This workflow pulls ad data, retrieves images, and stores results in Google services.
- In Fetch Ad Library Data, configure the HTTP request details for your ad library endpoint (see the request sketch at the end of this step).
- In Retrieve Ad Image, configure the HTTP request that fetches the image asset.
- Open Archive Image to Drive and set your destination folder and file options.
- Open Update Spreadsheet Records and configure the target spreadsheet, sheet, and update mode.
Credential Required: Connect your Google Drive credentials in Archive Image to Drive.
Credential Required: Connect your Google Sheets credentials in Update Spreadsheet Records.
⚠️ Common Pitfall: If you leave Google credentials unset, the workflow will fail when trying to archive images or write spreadsheet updates.
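For orientation, here is a hedged sketch of what Fetch Ad Library Data could look like if you route the scrape through Apify's run-sync API. The actor ID, token, and input fields below are placeholders, not real values; copy the actual actor name and input schema from the actor's documentation and from the HTTP Request node in the template.

```javascript
// Hypothetical request to an Apify actor that scrapes the Meta Ad Library.
// "YOUR_ACTOR_ID", the token, and the input fields are placeholders; check your actor's docs for the real schema.
const APIFY_TOKEN = "YOUR_APIFY_TOKEN";
const url = `https://api.apify.com/v2/acts/YOUR_ACTOR_ID/run-sync-get-dataset-items?token=${APIFY_TOKEN}`;

const response = await fetch(url, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    startUrls: [{ url: "https://www.facebook.com/ads/library/?q=your-competitor" }], // the Ad Library search to scrape
    maxItems: 300, // mirrors the "how many ads to scrape" parameter
  }),
});

const ads = await response.json(); // one object per scraped ad
```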
Step 3: Configure Core Processing and Filtering
These nodes clean parameters, calculate metrics, and narrow down the ads for review.
- In Configure Parameters, define the fields used by your downstream request and filtering logic.
- In Sanitize Prompt, add code to clean or normalize any query text used in Fetch Ad Library Data.
- In Compute Run Duration, calculate the duration metric you want to rank by (a sketch of this calculation follows this list).
- In Rank by Reach or Duration, set the sort field and direction based on your ranking strategy.
- In Filter Image Ads, define conditions that keep only image-based ads.
- In Cap Image Reviews, set the maximum number of ads to review per run.
- In Map Essential Fields, map only the fields needed for image retrieval and analysis.
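As a reference for the duration step, here is a minimal Code-node sketch, assuming each ad item carries delivery start and stop dates. The property names ad_delivery_start_time and ad_delivery_stop_time are placeholders; map them to whatever your scraper actually returns.

```javascript
// Sketch of the "Compute Run Duration" logic for an n8n Code node (run once for all items).
// Property names are assumptions; adjust them to your scraper's output.
const MS_PER_DAY = 1000 * 60 * 60 * 24;

return items.map((item) => {
  const start = new Date(item.json.ad_delivery_start_time);
  const end = item.json.ad_delivery_stop_time
    ? new Date(item.json.ad_delivery_stop_time)
    : new Date(); // ad still running: count up to today

  item.json.days_running = Math.max(0, Math.round((end - start) / MS_PER_DAY));
  return item;
});
```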
Step 4: Set Up AI Image Analysis
This section analyzes each ad image and structures the results for storage.
- Open Inspect Image Content and define the agent prompt and tools for image analysis.
- Use OpenAI Chat Engine as the language model connected to Inspect Image Content.
- Attach Structured Result Parser to Inspect Image Content to standardize outputs (see the schema sketch below).
Credential Required: Connect your OpenAI credentials in OpenAI Chat Engine.
OpenAI Chat Engine is the language-model sub-node connected to Inspect Image Content, so add your OpenAI credentials there rather than on the agent node itself.
Structured Result Parser is also an AI sub-node; it needs no credentials of its own and only standardizes the output that Inspect Image Content returns.
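To make the parser's job concrete, here is one way the structured output could be shaped. The field names mirror the insights described earlier (visual description, hook, offer, CTA, psychological triggers), but both the names and the sample values are illustrative; adjust them to match the example or schema defined in your copy of the workflow.

```javascript
// Illustrative example object for the Structured Result Parser.
// Provide an example like this (or an equivalent JSON schema) so every
// analyzed ad comes back with the same fields.
const exampleOutput = {
  visual_description: "Product shot on a plain background with a bold headline overlay",
  hook_elements: "Question headline, high-contrast colors, model looking at the product",
  main_offer: "20% off the first order",
  call_to_action: "Shop Now",
  psychological_triggers: "Scarcity, social proof",
};
```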
Step 5: Configure Outputs and Merging
Archived images and AI analysis results are merged and sent to the spreadsheet.
- Ensure Retrieve Ad Image outputs to both Archive Image to Drive and Inspect Image Content in parallel.
- Verify Archive Image to Drive and Inspect Image Content both connect into Combine Data Streams.
- In Combine Data Streams, confirm the merge operation matches your expected join strategy.
- In Update Spreadsheet Records, map merged fields to columns for reporting (sketched below).
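If it helps to visualize the final mapping, here is a hedged sketch of one merged record and the columns it could fill in Update Spreadsheet Records. The column names and sample values are suggestions, not the template's exact headers.

```javascript
// One merged item = ad metadata + Drive file reference + AI analysis.
// Column names and values are illustrative; align them with your spreadsheet headers.
const row = {
  ad_id: "123456789",                                      // from the scraped ad metadata
  reach: 150000,                                           // from the scraped ad metadata
  days_running: 42,                                        // from the Compute Run Duration step
  drive_file_url: "https://drive.google.com/file/d/...",   // from Archive Image to Drive
  visual_description: "...",                               // the remaining fields come from Inspect Image Content
  hook_elements: "...",
  main_offer: "...",
  call_to_action: "...",
  psychological_triggers: "...",
};
```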
Step 6: Test and Activate Your Workflow
Run a manual test to validate the end-to-end flow and then activate for regular use.
- Click Execute Workflow from Manual Start Trigger to run a test.
- Confirm that Archive Image to Drive creates a file and Update Spreadsheet Records writes a new row.
- Check that Inspect Image Content returns structured results through Structured Result Parser.
- Once successful, toggle the workflow to Active for production runs.
Common Gotchas
- Google Drive credentials can expire or need specific permissions. If things break, check the n8n Credentials panel and confirm the Drive scope can create files in your target folder.
- If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
- Default prompts in AI nodes are generic. Add your brand voice early or you’ll be editing outputs forever.
Frequently Asked Questions
How long does setup take?
About 30 minutes if your accounts and folders are ready.
Do I need to know how to code?
No. You’ll paste credentials, choose a Drive folder, and map a few Google Sheets columns.
Is n8n free to use?
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in OpenAI API costs (often a few cents per analyzed ad image) and any Apify usage fees.
Should I use n8n Cloud or self-host?
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
Can I analyze more ads or change what gets extracted?
Yes, but keep it intentional. You can raise the limit in the “Cap Image Reviews” node to analyze more creatives per run, and you can also adjust ranking in “Rank by Reach or Duration” so the extra volume stays high-signal. Common customizations include changing the fields extracted in “Inspect Image Content,” tightening the “Filter Image Ads” rules, and mapping extra columns in “Update Spreadsheet Records.” If you go big (like 50+ images in one run), expect longer processing and higher OpenAI costs.
Why did the workflow stop writing to Google Sheets?
Usually it’s expired Google credentials or the sheet was moved to a different Drive location. Reconnect Google Sheets in n8n, then confirm the spreadsheet ID and tab name still match what “Update Spreadsheet Records” is writing to. If only some rows fail, it can also be column mapping mismatches (new columns, renamed headers) or Google rate limits when you push a lot of rows at once.
How many ads can this workflow handle in one run?
Practically, most teams run it in batches of 10 to 50 analyzed ads at a time so the AI costs and runtime stay reasonable. On n8n Cloud, your limit is tied to monthly executions, while self-hosting has no execution cap (it mainly depends on your server and API rate limits). The workflow is designed to scrape up to 300 ads, then analyze a smaller capped subset.
Is this better in n8n than in Zapier or Make?
Often, yes. This workflow has sorting, filtering, merging data streams, and structured AI outputs, which gets clumsy (and expensive) in many Zapier-style setups once you add branching logic. n8n also gives you the option to self-host, which matters if you’re running research weekly across multiple competitors. Zapier or Make can still be fine for a simple “new row → send Slack message” type of flow. If you’re not sure, Talk to an automation expert and we’ll sanity-check your use case.
Once this is running, your competitor research stops being a one-off scramble and becomes an asset you keep building on. Honestly, that’s the difference between “collecting ads” and learning from them.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.