YouTube to Google Sheets, searchable video summaries
You find a great YouTube channel, bookmark ten videos, then realize you’ll never remember which one covered “pricing,” “positioning,” or that one perfect framework. Tabs multiply. Notes scatter. A week later, you’re back to re-watching the same parts.
This hits marketers doing competitive research, but content ops folks and solo founders building a learning library feel it too. With automated YouTube-to-Sheets summaries, you turn an entire channel into a searchable spreadsheet you can scan in minutes, not hours.
You’ll see how the workflow pulls a channel’s videos, grabs transcripts (English by default), summarizes them with an AI agent, and logs everything to Google Sheets with a clean structure you can actually search.
How This Automation Works
The full n8n workflow, from trigger to final output:
n8n Workflow Template: YouTube to Google Sheets, searchable video summaries
flowchart LR
subgraph sg0["Parameters Flow"]
direction LR
n0["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Get_Scraper_Status"]
n1["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Start_YouTube_Scraper"]
n2@{ icon: "mdi:swap-horizontal", form: "rounded", label: "Check_Scraper_Status", pos: "b", h: 48 }
n3["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Retrieve_Scraped_Data"]
n4@{ icon: "mdi:robot", form: "rounded", label: "Content_Analysis_Agent", pos: "b", h: 48 }
n5@{ icon: "mdi:robot", form: "rounded", label: "Output_Parser_Auto_Fix", pos: "b", h: 48 }
n6@{ icon: "mdi:robot", form: "rounded", label: "Structured_Output_Parser", pos: "b", h: 48 }
n7["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Data_Processing_Script"]
n8@{ icon: "mdi:cog", form: "rounded", label: "Wait_15_Seconds", pos: "b", h: 48 }
n9@{ icon: "mdi:cog", form: "rounded", label: "Convert_To_File", pos: "b", h: 48 }
n10@{ icon: "mdi:cog", form: "rounded", label: "Upload_To_Google_Drive", pos: "b", h: 48 }
n11@{ icon: "mdi:cog", form: "rounded", label: "Extract_From_File", pos: "b", h: 48 }
n12@{ icon: "mdi:cog", form: "rounded", label: "Download_From_Google_Drive", pos: "b", h: 48 }
n13@{ icon: "mdi:swap-vertical", form: "rounded", label: "Process_Each_Video", pos: "b", h: 48 }
n14["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Select_Subtitle_Language"]
n15["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/form.svg' width='40' height='40' /></div><br/>Parameters"]
n16@{ icon: "mdi:message-outline", form: "rounded", label: "Send a message", pos: "b", h: 48 }
n17["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>start from?"]
n18@{ icon: "mdi:cog", form: "rounded", label: "Wait_2_Seconds", pos: "b", h: 48 }
n19@{ icon: "mdi:database", form: "rounded", label: "Write_Contentdata_To_Sheet1", pos: "b", h: 48 }
n20@{ icon: "mdi:brain", form: "rounded", label: "DeepSeek Chat Model", pos: "b", h: 48 }
n21@{ icon: "mdi:swap-vertical", form: "rounded", label: "Extract_Alldata", pos: "b", h: 48 }
n22@{ icon: "mdi:database", form: "rounded", label: "Create_Sheet", pos: "b", h: 48 }
n15 --> n22
n17 --> n13
n22 --> n1
n18 --> n13
n9 --> n10
n21 --> n19
n8 --> n0
n11 --> n17
n0 --> n2
n13 --> n16
n13 --> n14
n20 -.-> n4
n20 -.-> n5
n2 --> n3
n2 --> n8
n3 --> n9
n1 --> n8
n4 --> n7
n7 --> n21
n5 -.-> n4
n10 --> n12
n14 --> n4
n6 -.-> n5
n12 --> n11
n19 --> n18
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n15 trigger
class n4,n5,n6 ai
class n20 aiModel
class n2 decision
class n19,n22 database
class n0,n1,n3 api
class n7,n14,n17 code
class n17 disabled
classDef customIcon fill:none,stroke:none
class n0,n1,n3,n7,n14,n15,n17 customIcon
The Problem: YouTube Research Doesn’t Stay Organized
Watching smart YouTubers is a legit way to learn, but it breaks down fast when you’re doing it for work. You watch a 22-minute video to find one segment on onboarding, then you can’t locate it later. Or you remember a creator said something sharp about “distribution,” but it’s buried somewhere across months of uploads. Even worse, different people on your team watch different videos and summarize them in totally different styles, which makes comparison basically impossible. The time cost is obvious, but the mental load is the real killer.
It’s not one big failure. It’s lots of small ones stacking up.
- You end up re-watching videos because your notes aren’t searchable or consistent.
- Important context gets lost when summaries live in Slack threads, Notion pages, and random docs.
- Channel-wide patterns stay invisible because you can’t scan a whole catalog at once.
- Manual tracking invites errors, like mismatched titles, missing URLs, and “which video was that?” confusion.
The Solution: Automatically Summarize a Channel Into Google Sheets
This workflow turns a YouTube channel into a structured research database. You start by submitting a few parameters in a simple form (channel URL, how many videos to pull, and a name for storage). n8n then launches an Apify YouTube scraper, waits while the scrape runs, and fetches the final dataset when it’s ready. From there, the workflow processes videos in batches, tries to select English subtitles when they exist, and passes each video’s transcript plus metadata into an AI summarization agent. Finally, every summarized record is appended into a dedicated Google Sheets tab, and you get a Gmail notification when the run finishes. It’s built for consistency, so scanning and comparing across dozens of videos feels straightforward instead of messy.
The workflow starts with a form trigger that creates a new tab in your Google Sheet for that channel. Next it scrapes, stores a raw backup file in Google Drive, then loops through videos in batches to generate summaries. When it’s done, your sheet is populated and your inbox gets the “all set” message.
What You Get: Automation vs. Results
| What This Workflow Automates | Results You’ll Get |
|---|---|
| Scraping a channel’s video list and transcripts via Apify | A full channel catalog you can scan in one place |
| AI summarization of every transcript into a consistent schema | Comparable summaries across dozens of videos |
| Writing each record to Google Sheets, with a raw backup in Drive | A searchable spreadsheet, a JSON backup, and a Gmail “done” notification |
Example: What This Looks Like
Say you’re researching a creator with about 120 videos. Manually, even a “quick” workflow is ugly: maybe 5 minutes to open, skim, and note each one, which is about 10 hours of effort, plus the reality that your notes will vary day to day. With this automation, you spend about 5 minutes filling the form and walking away. The scrape and summarization run in the background, and when it’s done you have a full Google Sheet you can search and filter right away.
What You’ll Need
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- Google Sheets to store summaries in a searchable table
- Google Drive for raw scrape backups and reprocessing
- Apify API token (get it from the Apify Console)
- DeepSeek API key (or other LLM) (get it from your LLM provider dashboard)
- Gmail to receive a completion notification
Skill level: Intermediate. You’ll connect accounts, paste API keys, and test a run with a small video count first.
Don’t want to set this up yourself? Talk to an automation expert (free 15-minute consultation).
How It Works
You submit the channel details. A form trigger collects the YouTube channel URL, how many videos to crawl, a storage name, and where to send the completion email. That storage name is also used to create a dedicated tab in Google Sheets, which keeps each channel’s research separate.
The workflow scrapes and waits. n8n launches the Apify YouTube Scraper actor, then polls for status using HTTP requests with a short wait in between. Once the scrape is finished, the workflow pulls the dataset and converts the JSON into a file for safe keeping.
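The launch-wait-poll pattern those HTTP and Wait nodes implement can be sketched as plain JavaScript (a minimal sketch: `fetchStatus` stands in for the real call to Apify’s runs endpoint, and the status names match Apify’s run states):

```javascript
// Minimal sketch of the Delay 15 Seconds → Fetch Scraper Status → Verify loop.
// `fetchStatus` is a stand-in for GET /v2/acts/streamers~youtube-scraper/runs/last.
async function pollUntilSucceeded(fetchStatus, { delayMs = 15000, maxAttempts = 40 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const run = await fetchStatus();
    if (run.status === "SUCCEEDED") return run; // route to "Pull Scraped Dataset"
    if (run.status === "FAILED" || run.status === "ABORTED") {
      throw new Error(`Scraper run ended with status ${run.status}`);
    }
    await new Promise((resolve) => setTimeout(resolve, delayMs)); // the Wait node
  }
  throw new Error("Timed out waiting for the Apify run to finish");
}
```

The `maxAttempts` cap is an addition for safety; the n8n loop itself simply keeps cycling until the status check passes.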
Transcripts are selected and summarized. Records are processed in batches, and a subtitle selection step tries to choose English subtitles when they’re available. Then an AI agent (using a chat model like DeepSeek) summarizes each video using the transcript plus metadata, and an auto-fixing parser helps keep the output structured.
Your spreadsheet gets populated. The workflow maps fields into a consistent schema and appends each row into Google Sheets, pausing briefly between writes so the sheet stays stable. When the last batch is complete, you receive a Gmail notification that the run is finished.
You can easily modify the subtitle language logic to support Mandarin (or any other language) based on your needs. See the full implementation guide below for customization options.
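As a sketch, a language-preference version of that subtitle selection Code node could look like this (the `subtitles` array shape, with `language`, `srt`, and `srtUrl` fields, is an assumption about the scraper output; verify it against your own dataset):

```javascript
// Sketch of a Code-node style subtitle picker. The `subtitles` shape
// (language / srt / srtUrl fields) is an assumption about the Apify output.
function pickSubtitle(video, preferred = "en") {
  const tracks = video.subtitles || [];
  const match =
    tracks.find((t) => t.language === preferred) ||
    tracks.find((t) => (t.language || "").startsWith(preferred)) || // e.g. "en-US"
    tracks[0] ||                                                    // fall back to any track
    null;
  return match ? { srt: match.srt, srtUrl: match.srtUrl } : { srt: null, srtUrl: null };
}
```

Swapping `preferred` to `"zh"` (or wiring it from the form) is essentially all the Mandarin customization requires.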
Step-by-Step Implementation Guide
Step 1: Configure the Form Trigger
This workflow starts from a user-submitted form that collects the YouTube channel URL, scrape limits, storage name, Apify API key, and notification email.
- Add and open Input Parameters Form.
- Set Form Title to `={{ $workflow.name }}`.
- Set Form Description to `=Please complete all the parameters listed below.`.
- Confirm the form fields include Youtuber_MainPage_URL, Total_number_video, Storing_Name, Apify_API, and Email.
Step 2: Connect Google Sheets and Google Drive
These nodes create the sheet tab, store the scraped dataset as a file, then download and extract items for processing.
- Open Create Sheet Tab and set Operation to `create`.
- Set Title to `={{ $('Input Parameters Form').item.json.Storing_Name }}` and choose your Document ID (currently `[YOUR_ID]`).
- Credential Required: Connect your googleSheetsOAuth2Api credentials in Create Sheet Tab.
- Open Store File in Drive and set Name to `={{ $('Input Parameters Form').item.json.Storing_Name }}`.
- Credential Required: Connect your googleDriveOAuth2Api credentials in Store File in Drive and Download Drive File.
- In Download Drive File, keep Operation as `download` and File ID as `={{ $('Store File in Drive').item.json.id }}`.
- Confirm Extract Items from File is connected after the download.
Step 3: Configure the YouTube Scraper and Polling Loop
This section launches the Apify YouTube scraper, waits, checks job status, and pulls the dataset when complete.
- Open Launch YouTube Scraper and set URL to `=https://api.apify.com/v2/acts/streamers~youtube-scraper/runs?token={{ $('Input Parameters Form').item.json.Apify_API }}`.
- Keep Method as `POST` and JSON Body as the provided JSON with `{{ $('Input Parameters Form').item.json.Total_number_video }}` and `{{ $('Input Parameters Form').item.json.Youtuber_MainPage_URL }}`.
- Ensure Delay 15 Seconds has Amount set to `15` and connects to Fetch Scraper Status.
- In Fetch Scraper Status, set URL to `=https://api.apify.com/v2/acts/streamers~youtube-scraper/runs/last?token={{ $('Input Parameters Form').item.json.Apify_API }}`.
- In Verify Scraper Result, keep the condition `={{ $json.data.status }}` equals `SUCCEEDED` to route completed runs to Pull Scraped Dataset and unfinished runs back to Delay 15 Seconds.
- In Pull Scraped Dataset, set URL to `=https://api.apify.com/v2/acts/streamers~youtube-scraper/runs/last/dataset/items?token={{ $('Input Parameters Form').item.json.Apify_API }}&format=json&status=SUCCEEDED`.
- Confirm Convert JSON to File follows the dataset pull and feeds into Store File in Drive.
⚠️ Common Pitfall: Resume From Index is disabled in the workflow JSON. Enable it only if you want to resume from a specific index; otherwise keep it disabled to process all items.
Step 4: Set Up the AI Summarization Pipeline
This segment selects subtitle language, summarizes the content, parses the structured output, and normalizes the data fields.
- Review Choose Subtitle Language to ensure it selects an English subtitle track and outputs `srt` and `srtUrl`.
- Open Content Summary Agent and set Text to `=id:{{ $('Batch Video Iterator').item.json.VideoID }} title:{{ $('Batch Video Iterator').item.json['\uFEFFTitle'] }} subtitle:{{ $('Choose Subtitle Language').item.json.srt }} description:{{ $('Batch Video Iterator').item.json.Description }}`.
- Keep Has Output Parser enabled, and confirm Structured Response Parser and Auto Repair Parser are connected.
- Credential Required: Connect your deepSeekApi credentials in DeepSeek Chat Engine. This language model powers Content Summary Agent, Structured Response Parser, and Auto Repair Parser.
- In Structured Response Parser, keep the JSON schema example as provided to enforce the output format.
- Verify Normalize Summary Data transforms the output into `VideoID`, `Field`, and `Summary`.
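Conceptually, the normalization step flattens the parsed AI output into those three fields. A minimal sketch (the `{ field, summary }` input shape is an assumption about the structured parser's output):

```javascript
// Flatten the parsed AI output into the three fields downstream nodes expect.
// The { field, summary } input shape is an assumption about the parser output.
function normalizeSummary(videoId, parsed) {
  return {
    VideoID: videoId,
    Field: String(parsed.field ?? "").trim(),
    Summary: String(parsed.summary ?? "").trim(),
  };
}
```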
Tip: If the AI output fails schema validation, Auto Repair Parser will retry. Ensure the system prompt in Content Summary Agent remains unchanged for reliable structure.
Step 5: Map Fields and Write to Google Sheets
After each video is summarized, the data is mapped to a structured row and appended to your sheet.
- Open Map All Fields and verify the assignments use expressions like `={{ $('Batch Video Iterator').item.json.id }}` and `={{ $('Normalize Summary Data').item.json.Summary }}`.
- In Append Data to Sheet, set Operation to `appendOrUpdate` and confirm Sheet Name is `={{ $('Input Parameters Form').item.json.Storing_Name }}`.
- Confirm the column schema includes VideoID, Title, Duration, Date, URL, Likes, Description, Field, and Summary.
- Credential Required: Connect your googleSheetsOAuth2Api credentials in Append Data to Sheet.
- Keep Pause 2 Seconds set to `2` to avoid rate limits before the next batch iteration.
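For reference, the row that Map All Fields assembles follows the column schema above. Sketched as plain JavaScript (the property names on `video` are assumptions about the Apify item shape, so adjust them to match your dataset):

```javascript
// Build one sheet row from a scraped video plus its normalized summary.
// Property names on `video` are assumptions about the Apify item shape.
function buildSheetRow(video, summary) {
  return {
    VideoID: video.id,
    Title: video.title,
    Duration: video.duration,
    Date: video.date,
    URL: video.url,
    Likes: video.likes,
    Description: video.description,
    Field: summary.Field,
    Summary: summary.Summary,
  };
}
```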
Step 6: Configure Batching and Notifications
Videos are processed in batches, summarized, and a completion email is sent when batching ends.
- Confirm Batch Video Iterator receives items from Resume From Index and loops back from Pause 2 Seconds.
- Batch Video Iterator outputs to both Choose Subtitle Language and Send Completion Email in parallel.
- In Send Completion Email, set Send To to `={{ $('Input Parameters Form').item.json.Email }}`.
- Set Subject to `=🎉 {{ $workflow.name }} Done~` and Message to `={{ $workflow.name }} finished at 🕓{{ new Date().toLocaleString("en-CA", { hour12: false }).replace(',', '') }}`.
- Credential Required: Connect your gmailOAuth2 credentials in Send Completion Email.
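The timestamp expression in the Message field is plain JavaScript and can be tried on its own. The `"en-CA"` locale gives an ISO-like `YYYY-MM-DD` date, and stripping the comma joins date and time into one readable stamp (output depends on your runtime's ICU data):

```javascript
// The timestamp logic from the Message field, extracted as a function.
// With full ICU data, "en-CA" formats dates as YYYY-MM-DD.
function runStamp(date = new Date()) {
  return date.toLocaleString("en-CA", { hour12: false }).replace(",", "");
}
```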
Step 7: Test and Activate Your Workflow
Run a controlled test to verify the scraper, summarizer, and storage components behave as expected.
- Click Execute Workflow and submit the Input Parameters Form with a known YouTube channel and a small Total_number_video value.
- Confirm Launch YouTube Scraper starts, then Fetch Scraper Status reaches a `SUCCEEDED` status and Pull Scraped Dataset returns items.
- Verify that Append Data to Sheet adds rows to your target Google Sheet and that Send Completion Email sends the notification.
- Once verified, switch the workflow to Active to allow production runs.
Common Gotchas
- Google Sheets credentials can expire or be missing OAuth scopes. If things break, check the n8n credentials screen and the connected Google account permissions first.
- If you’re using Wait nodes or external scraping, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses while the Apify run is still finishing.
- Default prompts in AI nodes are generic. Add your summary format and brand voice early, otherwise you will be editing outputs forever.
Frequently Asked Questions
How long does setup take?
About 30 minutes if you already have your API keys.
Do I need to know how to code?
No. You’ll mostly connect accounts, paste keys, and tweak a few fields in the form.
Can I run this for free?
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in Apify usage and your LLM costs (DeepSeek or another model), which depend on how many videos you process.
Should I use n8n Cloud or self-host?
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
Can I customize this workflow?
Yes, but plan to test it on a small batch first. You can adjust the “Choose Subtitle Language” logic to prioritize Mandarin, Spanish, or whatever you need, then keep the AI agent prompt consistent so your rows still compare cleanly. Common customizations include changing the summary format (bullets vs. paragraph), adding keywords/tags as extra columns, and sending a Slack message with run stats when it finishes.
Why isn’t data showing up in my Google Sheet?
Most of the time it’s an expired Google OAuth connection or the wrong Google account connected. Reconnect the Google Sheets credential in n8n, then confirm the target spreadsheet is selected in the “Create Sheet Tab” and “Append Data to Sheet” nodes. If it still fails, check sharing permissions on the sheet and confirm the workflow is not writing too fast, because Google can rate limit bursts.
How many videos can this handle?
A lot, as long as you size your scrape and run time realistically. On n8n Cloud, your limit is mostly monthly executions, and big channel runs can burn through them quickly because the workflow loops through many items. If you self-host, there’s no execution limit, but your server resources and the external APIs become the bottleneck. Practically, many teams start with about 50 videos to validate the sheet format, then scale up to a full channel run once they’re happy with the summaries.
Is n8n better than Zapier or Make for this?
Often, yes, because this isn’t a simple two-step zap. n8n handles looping, batching, waiting for external jobs (Apify), and structured AI outputs without turning your automation into a fragile chain of premium steps. The self-hosting option is also a big deal if you want to process large channels without worrying about per-task pricing. Zapier or Make can still be fine for lightweight “new video → send notification” use cases. If you want a second opinion before you invest time, Talk to an automation expert.
Once this is running, YouTube stops being a pile of “watch later” tabs and becomes a searchable research asset. Set it up once, then let the workflow do the catalog work for you.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.