Reddit to Gmail, AI trend reports your team can use
Your “quick trend check” turns into a half-day. Tabs everywhere. Notes in three places. Then someone asks, “Can you send what you found?” and you’re rebuilding the same report again.
This Reddit trend-report automation hits content marketers first, but editors and small teams running growth feel it too. Instead of copy-pasting links into a doc, you get a clean Gmail report plus a searchable source log in Sheets.
You’ll see how this workflow collects trends from Reddit, YouTube, and X, filters the noise with AI, and emails a report your team can actually use.
How This Automation Works
Here’s the complete workflow you’ll be setting up:
n8n Workflow Template: Reddit to Gmail, AI trend reports your team can use
flowchart LR
subgraph sg0["Form Flow"]
direction LR
n3@{ icon: "mdi:swap-vertical", form: "rounded", label: "Loop Over Items", pos: "b", h: 48 }
n4["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/form.svg' width='40' height='40' /></div><br/>Form Trigger"]
n5@{ icon: "mdi:swap-vertical", form: "rounded", label: "Analysis Parameters", pos: "b", h: 48 }
n6["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/reddit.svg' width='40' height='40' /></div><br/>Reddit: Search Posts"]
n7@{ icon: "mdi:swap-vertical", form: "rounded", label: "Format Reddit Data", pos: "b", h: 48 }
n8["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>YouTube: Search Videos"]
n9@{ icon: "mdi:swap-vertical", form: "rounded", label: "Split Out", pos: "b", h: 48 }
n10["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/x.dark.svg' width='40' height='40' /></div><br/>X: Search Tweets"]
n11["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Parse Twitter Data"]
n12@{ icon: "mdi:swap-vertical", form: "rounded", label: "Format YouTube Data", pos: "b", h: 48 }
n13["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/merge.svg' width='40' height='40' /></div><br/>Merge: All Sources"]
n14@{ icon: "mdi:robot", form: "rounded", label: "AI Pre-filtering", pos: "b", h: 48 }
n15@{ icon: "mdi:brain", form: "rounded", label: "Pre-filter Content", pos: "b", h: 48 }
n16["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Parse AI Filter Results"]
n17@{ icon: "mdi:swap-horizontal", form: "rounded", label: "IF: Is Content Relevant", pos: "b", h: 48 }
n18["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Handle Filter Errors"]
n19@{ icon: "mdi:cog", form: "rounded", label: "Aggregate: Relevant Items", pos: "b", h: 48 }
n20@{ icon: "mdi:robot", form: "rounded", label: "AI Deep Analysis", pos: "b", h: 48 }
n21@{ icon: "mdi:brain", form: "rounded", label: "Deep Analysis", pos: "b", h: 48 }
n22["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Structure Analysis Result"]
n23@{ icon: "mdi:cog", form: "rounded", label: "Aggregate: Deep Analysis Res..", pos: "b", h: 48 }
n24@{ icon: "mdi:robot", form: "rounded", label: "AI: Synthesize Final Report", pos: "b", h: 48 }
n25@{ icon: "mdi:brain", form: "rounded", label: "Synthesis", pos: "b", h: 48 }
n26@{ icon: "mdi:swap-vertical", form: "rounded", label: "Format Report Payloads", pos: "b", h: 48 }
n27@{ icon: "mdi:swap-vertical", form: "rounded", label: "Assemble Final Report", pos: "b", h: 48 }
n28@{ icon: "mdi:message-outline", form: "rounded", label: "Send HTML Report", pos: "b", h: 48 }
n29["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Send Feishu Card"]
n30@{ icon: "mdi:database", form: "rounded", label: "Archive Data", pos: "b", h: 48 }
n9 --> n12
n25 -.-> n24
n4 --> n5
n21 -.-> n20
n3 --> n14
n20 --> n22
n14 --> n16
n10 --> n11
n7 --> n13
n13 --> n3
n11 --> n13
n15 -.-> n14
n5 --> n8
n5 --> n6
n5 --> n10
n12 --> n13
n6 --> n7
n27 --> n28
n26 --> n30
n26 --> n27
n26 --> n29
n8 --> n9
n17 --> n19
n17 --> n18
n16 --> n17
n19 --> n20
n22 --> n23
n24 --> n26
n23 --> n24
end
subgraph sg1["Flow 2"]
direction LR
n1["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>ScrapingBee抓取x推文"]
end
subgraph sg2["Flow 3"]
direction LR
n0["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Apify抓取x推文"]
end
subgraph sg3["Flow 4"]
direction LR
n2["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>twitterapi抓取x推文"]
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n4 trigger
class n14,n20,n24 ai
class n15,n21,n25 aiModel
class n17 decision
class n30 database
class n8,n29,n1,n0,n2 api
class n11,n16,n18,n22 code
classDef customIcon fill:none,stroke:none
class n4,n6,n8,n10,n11,n13,n16,n18,n22,n29,n1,n0,n2 customIcon
Why This Matters: Trend research becomes a time sink
Trend research sounds simple until you do it properly. You scan Reddit threads, check YouTube for what’s gaining traction, peek at X to see the takes, then try to turn all of that into something actionable. The painful part is not “finding links.” It’s deciding what matters, capturing context, and making it shareable so your team can move. Do it manually and you’ll keep losing the same hours every week, plus you’ll miss great topics because you ran out of patience.
And the friction compounds. Here’s where it breaks down.
- Copy-pasting posts, video URLs, and quotes into a doc is slow, and you still end up hunting for the original sources later.
- Your “analysis” becomes a gut feeling because you’re skimming too fast to summarize sentiment, arguments, and angles.
- Different people report trends differently, so your weekly brief is inconsistent and hard to compare over time.
- Even when you find something valuable, it sits in a tab pile instead of landing in the team’s inbox and your tracking sheet.
What You’ll Build: An AI trend report that emails itself
This workflow turns one keyword submission into a full, structured trend brief. A teammate drops a topic into a public form, and the automation pulls relevant, trending content from Reddit, YouTube, and X in parallel using the official APIs (plus optional retrieval utilities). Then it runs a layered AI pipeline: a fast “precheck” model screens out junk, a deeper model analyzes the best items for summaries and sentiment, and a final “strategist” model synthesizes everything into a polished HTML report. At the end, your team receives that report via Gmail, a short summary can be pushed to Feishu, and the key sources get archived to Google Sheets so you can build a repeatable research library.
The workflow starts with a form submission and spins up three parallel searches. AI filters first, analyzes second, then composes one final report that’s ready to forward, file, or turn into content briefs.
What You’re Building
| What Gets Automated | What You’ll Achieve |
|---|---|
| Searching Reddit, YouTube, and X for each submitted keyword, then normalizing and merging the results | One research feed per topic instead of a pile of tabs and scattered notes |
| AI relevance screening, deep analysis (summaries, sentiment, arguments), and final report synthesis | A consistent, structured trend brief without hours of manual skimming |
| Emailing the HTML report via Gmail, optionally posting a Feishu summary, and archiving sources to Google Sheets | The brief lands in your team's inbox while a searchable source log builds up over time |
Expected Results
Say your team researches 5 keywords a week. Manually, a “proper” pass is often about 2 hours per keyword once you’ve checked Reddit, YouTube, and X, pulled links, and written something coherent, which is roughly 10 hours weekly. With this workflow, submitting the keyword takes a minute or two, AI processing runs in the background, and you mostly spend about 10 minutes skimming the emailed report and picking next actions. That’s most of a workday back every week, without sacrificing depth.
Before You Start
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- Google Sheets for archiving sources and analysis outputs
- Gmail to send the final HTML report
- Google Cloud API key + OAuth2 (get it from Google Cloud Console)
- Reddit OAuth2 app (create it in your Reddit developer settings)
- YouTube Data API v3 key (enable API in Google Cloud Console)
- X (Twitter) developer app (get it from the X developer portal)
Skill level: Intermediate. You’ll mostly be connecting credentials and editing a few destinations (email, spreadsheet, webhooks).
Want someone to build this for you? Talk to an automation expert (free 15-minute consultation).
Step by Step
A keyword kicks everything off. Someone submits a keyword through the workflow’s public Form Trigger. That single input becomes the “research assignment” for the automation.
Sources are collected in parallel. n8n queries Reddit for posts, calls the YouTube API for videos, and searches X for relevant tweets. The workflow normalizes the data so posts, videos, and tweets all share a consistent structure before being merged.
AI filters first, then analyzes deeply. A lightweight model runs a fast relevance screen so you’re not paying for deep analysis on low-signal items. Only what passes gets summarized, scored for sentiment and arguments, and converted into structured output you can reuse.
A final HTML report is composed and delivered. The “strategist” stage synthesizes the strongest items into a readable report, then Gmail sends it to your chosen recipients. In the same run, the workflow archives the sources and key fields to Google Sheets (and can send a short Feishu message if you use it).
You can easily modify the sources (for example, remove X or add another feed) and change the report format to match your team’s briefing style. See the full implementation guide below for customization options.
Step-by-Step Implementation Guide
Step 1: Configure the Form Trigger
Set up the intake form that starts the topic discovery process and captures the keyword for analysis.
- Add the Form Intake Trigger node and open its settings.
- Set Form Title to `选题捕手` ("Topic Catcher").
- Set Form Description to `Please enter the core keywords you want to analyze, then click Submit.`
- In Form Fields, create a required field labeled keyword with placeholder `e.g.AI`.
- Connect Form Intake Trigger to Configure Analysis Inputs.
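Before moving on, it helps to know roughly what the submission hands to the next node, since Step 2's expressions build everything from it. Here's a minimal sketch, assuming a POST-style submission where the keyword arrives under body (the query fallback covers URL-style submits); the derived fields mirror the Configure Analysis Inputs assignments you'll set in Step 2.

```javascript
// Hypothetical shape of a form submission as seen by the next node. The exact payload
// depends on your n8n version; the fallback below mirrors the Step 2 expression
// ($json.body.keyword || $json.query.keyword || 'AI').
const submission = { body: { keyword: "AI" }, query: {} };

// Plain-JS equivalent of the three Configure Analysis Inputs assignments:
const keyword = submission.body.keyword || submission.query.keyword || "AI";
const searchDate = new Date().toISOString().slice(0, 10);               // like $now.toFormat('yyyy-MM-dd')
const stamp = new Date().toISOString().replace(/\D/g, "").slice(0, 14); // like yyyyMMddHHmmss
const analysisId = `${stamp}_${keyword.replace(" ", "_")}`;

console.log({ keyword, searchDate, analysisId });
```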
Step 2: Connect Social and Video Data Sources
Configure the three parallel discovery sources for YouTube, Reddit, and X, then normalize the results into a single feed.
- Open Configure Analysis Inputs and set these assignments: keyword to `{{ $json.body.keyword || $json.query.keyword || 'AI' }}`, search_date to `{{ $now.toFormat('yyyy-MM-dd') }}`, and analysis_id to `{{ $now.toFormat('yyyyMMddHHmmss') }}_{{ ($json.body.keyword || $json.query.keyword || 'default').replace(' ', '_') }}`.
- Configure YouTube Video Search with URL `https://www.googleapis.com/youtube/v3/search` and set Query Parameters including q to `{{ $('Configure Analysis Inputs').item.json.keyword }}` and publishedAfter to `{{ $now.minus({days: 7}).toISO() }}`. Credential Required: Connect your `httpQueryAuth` credentials.
- Connect YouTube Video Search → Expand Video Items and set Field to Split Out to `items`, then connect to Shape YouTube Records.
- In Shape YouTube Records, map fields like content to `{{ $json.snippet.title + ' ' + $json.snippet.description }}` and url to `=https://www.youtube.com/watch?v={{ $json.id.videoId }}`.
- Configure Reddit Post Search with Keyword set to `{{ $('Configure Analysis Inputs').item.json.keyword }}` and Location set to `allReddit`. Credential Required: Connect your `redditOAuth2Api` credentials.
- Connect Reddit Post Search → Shape Reddit Records and map content to `{{ $json.title + ' ' + ($json.selftext || '') }}` and engagement_score to `{{ $json.ups }}`.
- Configure X Tweet Search with Search Text set to `{{ $json.keyword }}`. Credential Required: Connect your `twitterOAuth2Api` credentials.
- Connect X Tweet Search → Normalize Tweet Data to standardize Twitter output.
- Connect Shape YouTube Records, Shape Reddit Records, and Normalize Tweet Data into Combine Source Feeds with Number Inputs set to `3`.
- Note the parallel branch: Configure Analysis Inputs feeds YouTube Video Search, Reddit Post Search, and X Tweet Search in parallel.
⚠️ Common Pitfall: If Shape YouTube Records uses the videoId field, ensure the YouTube response includes video items; otherwise the URL will be empty.
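The three shaping nodes exist so that very different API responses arrive at Combine Source Feeds as interchangeable records. Here's a rough sketch of that idea as Code-node JavaScript; content, url, and engagement_score come from the mappings above, while source and the Reddit/X URL construction are illustrative assumptions you'd adapt to the response fields you actually receive.

```javascript
// Sketch of the normalization behind Shape YouTube Records, Shape Reddit Records,
// and Normalize Tweet Data. Only content, url, and engagement_score appear in the
// template's mappings; the other fields here are illustrative assumptions.
function fromYouTube(item) {
  return {
    source: "youtube",
    content: `${item.snippet.title} ${item.snippet.description}`,
    url: `https://www.youtube.com/watch?v=${item.id.videoId}`,
    engagement_score: 0, // search results carry no view counts without an extra API call
  };
}

function fromReddit(post) {
  return {
    source: "reddit",
    content: `${post.title} ${post.selftext || ""}`,
    url: `https://www.reddit.com${post.permalink || ""}`, // permalink is relative in the API response
    engagement_score: post.ups,
  };
}

function fromTweet(tweet) {
  return {
    source: "x",
    content: tweet.text,
    url: `https://x.com/i/status/${tweet.id}`, // assumed; adjust to whatever your X source returns
    engagement_score: (tweet.favorite_count || 0) + (tweet.retweet_count || 0),
  };
}
```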
Step 3: Set Up AI Precheck and Deep Analysis
Batch the combined feed, run AI precheck screening, filter relevant items, and perform deep analysis.
- Connect Combine Source Feeds → Iterate Batches and set Batch Size to `10`.
- Configure AI Precheck Screening with the provided prompt and ensure the keyword expression `{{ $('Configure Analysis Inputs').item.json.keyword }}` and data payload `{{ JSON.stringify($json.data) }}` are present.
- Open Gemini Precheck Model and select Model Name `models/gemini-2.0-flash`. Credential Required: Connect your `googlePalmApi` credentials. This model is connected as the language model for AI Precheck Screening, so make sure the credentials are added to Gemini Precheck Model.
- Connect AI Precheck Screening → Decode Precheck Output → Relevance Decision, then set the IF condition to leftValue `{{ $json.decision }}` equals `YES`.
- Connect the TRUE output of Relevance Decision → Collect Relevant Items → AI In-Depth Analysis.
- Open Gemini Deep Model and set Model Name to `models/gemini-2.0-flash`. Credential Required: Connect your `googlePalmApi` credentials. This model is connected as the language model for AI In-Depth Analysis, so make sure the credentials are added to Gemini Deep Model.
- Connect AI In-Depth Analysis → Structure Analysis Output → Gather Deep Results to clean and aggregate AI results.
⚠️ Common Pitfall: The AI nodes return their results as JSON strings, and models sometimes wrap that JSON in markdown code fences. Decode Precheck Output and Structure Analysis Output contain cleanup logic that handles this, so do not remove that code.
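If you swap in a different model and parsing starts failing, the job those cleanup nodes do is small: strip any code fences, parse the JSON, and fail soft. A minimal sketch, assuming the precheck model returns an object with a decision field:

```javascript
// Sketch of the cleanup idea in Decode Precheck Output (assumed response shape:
// a JSON object with a "decision" field, possibly wrapped in markdown code fences).
function parseModelJson(raw) {
  const cleaned = raw
    .replace(/`{3}(json)?/gi, "") // strip opening/closing fences if the model added them
    .trim();
  try {
    return JSON.parse(cleaned);
  } catch (err) {
    // Fail soft: route unparseable output to the FALSE branch instead of crashing the run.
    return { decision: "NO", error: `Unparseable model output: ${err.message}` };
  }
}
```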
Step 4: Configure Final Report Composition and Outputs
Generate the HTML report, then send it to email, Feishu, and archive it to Sheets.
- Connect Gather Deep Results → AI Final Report Composer and keep the HTML-focused prompt in place.
- Open Gemini Report Model and set Model Name to `models/gemini-2.5-flash`. Credential Required: Connect your `googlePalmApi` credentials. This model is connected as the language model for AI Final Report Composer, so make sure the credentials are added to Gemini Report Model.
- Configure Prepare Report Payloads with report_title set to `{{ '【' + $('Configure Analysis Inputs').item.json.keyword + '】热点分析报告 (' + $('Configure Analysis Inputs').item.json.search_date + ')' }}`, report_content to `{{ $json.output }}`, and analysis_summary to `{{ '本次分析共合并了 ' + $('Combine Source Feeds').all().length + ' 条原始数据,筛选后深度分析了 ' + $('Collect Relevant Items').item.json.data.length + ' 条高价值内容。' }}`.
- Note the parallel branch: Prepare Report Payloads feeds Archive to Sheets, Assemble Final Report, and Send Feishu Summary in parallel.
- Open Assemble Final Report and set final_report_text to `{{ $json.report_title + '\n\n**分析概要**:\n' + $json.analysis_summary + '\n\n**详细报告**:\n' + $json.report_content }}`.
- Configure Email HTML Report with Message `{{ $json.final_report_text }}` and Subject `{{ $('Prepare Report Payloads').item.json.report_title }}`. Credential Required: Connect your `gmailOAuth2` credentials.
- Configure Archive to Sheets with Operation set to `append` and select the target spreadsheet and sheet. Credential Required: Connect your `googleSheetsOAuth2Api` credentials.
⚠️ Common Pitfall: If Archive to Sheets has no Document and Sheet selected, the append action will fail silently during test runs.
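One more note on this step: the default title and summary strings in Prepare Report Payloads are Chinese, so the emailed report's framing will be too. If you want English output, a Code-node equivalent of those assignments might look like the sketch below; the node references match the expressions above, the English strings are my substitutions, and the AI prompts themselves would also need an English instruction.

```javascript
// Code-node sketch of the Prepare Report Payloads assignments with the default
// Chinese strings swapped for English. Node references mirror the expressions above.
const params = $('Configure Analysis Inputs').item.json;
const mergedCount = $('Combine Source Feeds').all().length;
const analyzedCount = $('Collect Relevant Items').item.json.data.length;

return [{
  json: {
    report_title: `[${params.keyword}] Trend Analysis Report (${params.search_date})`,
    report_content: $json.output, // the report produced by AI Final Report Composer
    analysis_summary: `Merged ${mergedCount} raw items; ${analyzedCount} high-value items were analyzed in depth.`,
  },
}];
```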
Step 5: Add Error Handling
Capture any failed or irrelevant precheck results and surface them for diagnostics.
- Connect the FALSE output of Relevance Decision to Precheck Error Handler to log failed checks.
- Keep the existing code in Precheck Error Handler that references `{{ $('Configure Analysis Inputs').item.json.analysis_id }}` for traceability.
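For reference, here's a minimal Code-node sketch of what that handler has to do: tag whatever the precheck rejected with the run's analysis_id so it can be traced in the execution log. The decision and reason fields are assumptions about what the precheck returns; the template's actual handler may log more.

```javascript
// Minimal sketch of a Precheck Error Handler style Code node: label rejected items
// with the run's analysis_id so failed checks can be traced later.
const analysisId = $('Configure Analysis Inputs').item.json.analysis_id;

return $input.all().map((item) => ({
  json: {
    analysis_id: analysisId,
    decision: item.json.decision ?? "UNKNOWN",        // assumed field from the precheck output
    reason: item.json.reason ?? "no reason returned", // assumed field
    skipped_at: new Date().toISOString(),
  },
}));
```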
Step 6: Test and Activate Your Workflow
Run a manual test to verify end-to-end data retrieval, AI analysis, and report delivery, then enable the workflow for production.
- Click Test Workflow and submit the Form Intake Trigger form using a sample keyword like `AI`.
- Confirm that Combine Source Feeds receives records from all three sources and that Iterate Batches produces batched items.
- Verify that Relevance Decision sends YES items to Collect Relevant Items and the AI nodes return structured JSON.
- Check that Email HTML Report sends the formatted report and that Archive to Sheets appends a new row.
- Once confirmed, toggle the workflow to Active for ongoing usage.
Troubleshooting Tips
- Google (Gmail/Sheets/Gemini) credentials can expire or need specific permissions. If things break, check the n8n Credentials section and your Google Cloud OAuth consent/scope settings first.
- If you’re using Wait nodes or external processing, response times vary. Bump up the wait duration if downstream nodes fail on empty responses.
- Default prompts in AI nodes are generic. Add your brand voice early (tone, audience, and “what good looks like”) or you’ll be editing outputs forever.
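On that last tip, one low-effort approach is to append a short block to the existing prompts in AI Precheck Screening and AI Final Report Composer rather than rewriting them. The wording below is purely illustrative; adapt it to your audience and formats.

```javascript
// Hypothetical voice-and-relevance addendum you could paste at the end of the AI prompts.
const voiceAddendum = `
Audience: B2B content marketers planning next week's posts.
Tone: direct and practical, no hype.
Relevance bar: keep only items we could turn into a post or brief this week.
Report style: keep the HTML structure, but open every section with a one-line "so what".
`;
```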
Quick Answers
How long does setup take?
About 20–30 minutes if your API accounts are ready.
Do I need to know how to code?
No coding required. You’ll mostly connect credentials and edit the recipients plus spreadsheet destination.
Is n8n free to use?
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in API costs for AI and data sources (typically a few dollars a month at light usage).
Should I use n8n Cloud or self-host?
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
Can I customize the sources and the report format?
Yes, and you should. You can swap out the sources by editing the Reddit Post Search, YouTube Video Search, and X Tweet Search steps, then keep the same Combine Source Feeds and AI pipeline. Common tweaks include narrowing to specific subreddits, changing the YouTube query to a channel list, adjusting the AI “relevance” criteria in the screening stage, and rewriting the final report prompt to match your content format (newsletter, briefing doc, pitch list).
Why is the Google Sheets archive step failing?
Usually it’s expired OAuth credentials or missing Google permissions. Reconnect Google in n8n Credentials, then confirm the target spreadsheet is shared with the correct Google account. If you recently changed the spreadsheet ID, double-check the Archive to Sheets node too. Rate limits can also show up if you run big batches; reducing batch size often stabilizes it.
How many reports can I run per week?
On n8n Cloud you can run plenty of weekly reports on the Starter plan, and if you self-host there’s no execution limit beyond your server resources.
Do I really need n8n for this, or would Zapier or Make work?
Often, yes, because this workflow isn’t a simple “send data from A to B.” You’re doing branching logic, batching, multi-stage AI, and a final synthesis step, and n8n is built for that kind of orchestration. Self-hosting also matters if you want to run lots of internal research without counting every task. Zapier and Make can work, but you may hit complexity ceilings fast once you add filtering, aggregation, and formatted HTML output. If you’re unsure, talk to an automation expert and describe your exact brief format.
Once this is running, trend research stops being a heroic effort and becomes a repeatable system. The workflow handles the collecting and summarizing so your team can focus on picking the right topics.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.