Reddit to Google Sheets, sentiment insights you trust
You start with a simple question: “What are people saying on Reddit?” Then it turns into tabs, copy-paste, half-read comment threads, and a spreadsheet that’s already outdated by the time you share it.
Brand managers feel this when leadership asks for “the vibe” before a launch. Market research analysts and product folks get it too. This Reddit sentiment automation keeps a Google Sheet updated with categorized posts and comment sentiment, so you’re working with evidence, not guesses.
You’ll see how the workflow pulls Reddit data with Bright Data, runs Gemini sentiment per comment, and writes clean rows to Google Sheets you can actually trust.
How This Automation Works
Here’s the complete workflow you’ll be setting up:
n8n Workflow Template: Reddit to Google Sheets, sentiment insights you trust
flowchart LR
subgraph sg0["When clicking ‘Execute workflow’ Flow"]
direction LR
n0@{ icon: "mdi:play-circle", form: "rounded", label: "When clicking ‘Execute workflow’", pos: "b", h: 48 }
n1["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Get status"]
n2["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Get data"]
n3@{ icon: "mdi:swap-horizontal", form: "rounded", label: "Switch", pos: "b", h: 48 }
n4["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>scrap reddit"]
n5@{ icon: "mdi:cog", form: "rounded", label: "Wait", pos: "b", h: 48 }
n6@{ icon: "mdi:robot", form: "rounded", label: "Text Classifier", pos: "b", h: 48 }
n7@{ icon: "mdi:brain", form: "rounded", label: "Google Gemini Chat Model", pos: "b", h: 48 }
n8@{ icon: "mdi:swap-vertical", form: "rounded", label: "Edit Fields", pos: "b", h: 48 }
n9@{ icon: "mdi:brain", form: "rounded", label: "Google Gemini Chat Model1", pos: "b", h: 48 }
n10@{ icon: "mdi:swap-vertical", form: "rounded", label: "Split Out", pos: "b", h: 48 }
n11@{ icon: "mdi:swap-vertical", form: "rounded", label: "Loop Over Items", pos: "b", h: 48 }
n12@{ icon: "mdi:swap-vertical", form: "rounded", label: "format sentiment", pos: "b", h: 48 }
n13["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/merge.svg' width='40' height='40' /></div><br/>Merge"]
n14@{ icon: "mdi:database", form: "rounded", label: "Append Sentiments", pos: "b", h: 48 }
n15@{ icon: "mdi:robot", form: "rounded", label: "Sentiment Analysis per comment", pos: "b", h: 48 }
n16@{ icon: "mdi:swap-horizontal", form: "rounded", label: "Filter", pos: "b", h: 48 }
n17@{ icon: "mdi:swap-vertical", form: "rounded", label: "No category", pos: "b", h: 48 }
n18@{ icon: "mdi:swap-vertical", form: "rounded", label: "edit for Sentiment analysis", pos: "b", h: 48 }
n5 --> n1
n13 --> n12
n16 --> n14
n3 --> n2
n3 --> n5
n2 --> n11
n10 --> n18
n1 --> n3
n8 --> n10
n17 --> n11
n4 --> n1
n11 --> n6
n6 --> n8
n6 --> n17
n12 --> n16
n14 --> n11
n7 -.-> n6
n9 -.-> n15
n18 --> n15
n15 --> n13
n0 --> n4
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n0 trigger
class n6,n15 ai
class n7,n9 aiModel
class n3,n16 decision
class n14 database
class n1,n2,n4 api
Why This Matters: Reddit sentiment is messy without a system
Reddit is brutally honest, which is exactly why it’s useful and exactly why it’s hard to summarize. During a tentpole moment like WWDC, a single thread can explode with jokes, hot takes, and real product feedback buried three screens down. Manually, you end up sampling a few comments, missing context, and accidentally over-weighting the loudest voices. Worse, you spend your time gathering data instead of interpreting it. By the time your notes hit Slack or a deck, the conversation has already moved on.
It adds up fast. Here’s where the workflow earns its keep.
- You lose hours hunting for relevant posts, then re-checking them because the thread changed overnight.
- Copying posts and comments into Sheets invites mistakes, and one missed column ruins sorting and filtering.
- Sentiment becomes a “gut feel” because reading 200 comments is not a repeatable method.
- Sharing insights is slow since teammates need your context to trust what you found.
What You’ll Build: Reddit sentiment analysis that lands in Google Sheets
This workflow automates sentiment analysis of Reddit posts related to Apple’s WWDC25 event (and you can swap the topic to anything). You manually launch it in n8n, which triggers a Bright Data scraping job that searches Reddit based on the parameters you set. The workflow then polls Bright Data until the scraping job is finished, pulls the dataset results, and processes each post record in batches. Next, it classifies the post text into useful topics, maps the fields you care about (title, link, category, and more), then splits the comment text into individual items. Each comment gets sent through Gemini sentiment analysis, the results are merged back into a clean format, low-signal entries can be filtered out, and the final rows are appended into a Google Sheet that stays current.
The workflow starts with scraping, then shifts into “make sense of it” mode with topic classification and comment-level sentiment. Finally, it formats everything into a spreadsheet-friendly structure and writes it to Google Sheets so your team can review, sort, and share without extra cleanup.
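Before the details, here's roughly what one finished row looks like. The field names follow the node mappings in Step 5; every value below is invented for illustration:

```typescript
// One illustrative row as it lands in Google Sheets. Field names mirror the
// Step 5 node mappings; all values are made up.
const exampleRow = {
  post_id: "t3_abc123",
  url: "https://www.reddit.com/r/apple/comments/abc123/",
  title: "WWDC25 keynote megathread",
  category: "Product feedback", // assigned by the Text Classifier
  comment: "The new on-device APIs look genuinely useful.",
  sentimentAnalysis: "Positive", // Gemini's per-comment verdict
};
```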
What You’re Building
| What Gets Automated | What You’ll Achieve |
|---|---|
| Bright Data scrape of Reddit posts and comments for your keyword | Fresh, relevant threads without tabs and copy-paste |
| Topic classification plus per-comment Gemini sentiment | An evidence-backed read on how people actually feel |
| Formatting, filtering, and appending rows to Google Sheets | A clean, shareable sheet your team can sort and trust |
Expected Results
Say you’re tracking WWDC chatter across 20 relevant posts, and each post has about 30 comments worth reading. Manually, even a quick pass can take about 2 minutes per comment once you include scrolling, context switching, and copying notes, which works out to roughly 20 hours of attention (20 posts × 30 comments × 2 minutes = 1,200 minutes). With this workflow, you spend about 10 minutes adjusting the scrape parameters and launching it, then wait while Bright Data runs and Gemini scores sentiment in the background. You typically end up with a Sheet you can scan in minutes, then use your time on the part that matters: what to do next.
Before You Start
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- Bright Data for scraping Reddit posts and comments.
- Google Sheets to store and share the sentiment log.
- Gemini API key (get it from Google AI Studio / Google AI credentials).
Skill level: Intermediate. You won’t code, but you will paste API keys, connect OAuth, and tweak a JSON body safely.
Want someone to build this for you? Talk to an automation expert (free 15-minute consultation).
Step by Step
1. You launch the workflow manually. This is ideal for research moments (like WWDC week) when you want control over when a fresh pull happens.
2. Bright Data starts a Reddit scraping job, then n8n checks the status. The workflow triggers the scrape via HTTP Request, waits a short interval, and polls until Bright Data reports the dataset is ready.
3. Posts are processed and categorized before sentiment even begins. n8n iterates through records, uses a text classifier to tag topics, then maps the fields you'll want in the spreadsheet so everything stays consistent.
4. Comments are split and analyzed with Gemini sentiment per comment. Each comment becomes a small payload, Gemini returns a sentiment result, and n8n merges and formats everything into rows that are easy to filter.
5. Google Sheets gets updated. A filter can keep only "clear" sentiment entries, then the workflow appends results into your target sheet.
You can easily modify the Reddit search term to track a different event or brand based on your needs. See the full implementation guide below for customization options.
Step-by-Step Implementation Guide
Step 1: Configure the Manual Trigger
Start the workflow manually so you can validate the Bright Data scrape and downstream AI processing before scheduling.
- Add Manual Launch Trigger as the start node (no configuration required).
- Connect Manual Launch Trigger to Trigger Reddit Scrape.
Step 2: Connect Bright Data and Manage Job Status
Trigger the dataset scrape, poll for completion, and route based on job status.
- In Trigger Reddit Scrape, set URL to `https://api.brightdata.com/datasets/v3/trigger` and Method to `POST`.
- In Trigger Reddit Scrape, set JSON Body to `[ { "keyword": "WWDC25", "date": "Past month", "num_of_posts": 100, "sort_by": "New" } ]`.
- In Trigger Reddit Scrape → Query Parameters, set dataset_id to `[YOUR_ID]`, include_errors to `true`, type to `discover_new`, and discover_by to `keyword`.
- In Trigger Reddit Scrape → Headers, set Authorization to `Bearer [CONFIGURE_YOUR_TOKEN]`.
- In Retrieve Job Status, set URL to `=https://api.brightdata.com/datasets/v3/progress/{{ $json.snapshot_id }}` and add the same Authorization header.
- In Status Router, add a rule for ready with Left Value `={{ $json.status }}` equals `ready`, and a rule for running with Left Value `={{ $json.status }}` equals `running`.
- Configure Delay Interval with Amount set to `15` and route the running path to it so it loops back to Retrieve Job Status.
Until Bright Data reports `ready`, the workflow will keep looping through Delay Interval.
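To see the same loop outside n8n, here's a minimal TypeScript sketch of the trigger-and-poll pattern these nodes implement. Endpoints and parameters come from the node settings above; `[YOUR_ID]` and the token are placeholders, and the `failed` status check is an assumption:

```typescript
// Sketch of the Trigger Reddit Scrape → Delay Interval → Retrieve Job Status loop.
const BASE = "https://api.brightdata.com/datasets/v3";
const HEADERS = {
  Authorization: "Bearer [CONFIGURE_YOUR_TOKEN]",
  "Content-Type": "application/json",
};

async function scrapeReddit(keyword: string): Promise<string> {
  // Kick off the discovery job (mirrors the Trigger Reddit Scrape node)
  const trigger = await fetch(
    `${BASE}/trigger?dataset_id=[YOUR_ID]&include_errors=true&type=discover_new&discover_by=keyword`,
    {
      method: "POST",
      headers: HEADERS,
      body: JSON.stringify([
        { keyword, date: "Past month", num_of_posts: 100, sort_by: "New" },
      ]),
    },
  );
  const { snapshot_id } = await trigger.json();

  // Poll every 15 seconds until the snapshot is ready (the Delay Interval loop)
  while (true) {
    const res = await fetch(`${BASE}/progress/${snapshot_id}`, { headers: HEADERS });
    const { status } = await res.json();
    if (status === "ready") return snapshot_id;
    if (status === "failed") throw new Error("Bright Data job failed"); // assumed status
    await new Promise((r) => setTimeout(r, 15_000));
  }
}
```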
Step 3: Fetch Results and Classify Post Topics
Once the dataset is ready, load the snapshot, iterate over records, and classify each post topic.
- In Fetch Dataset Results, set URL to `=https://api.brightdata.com/datasets/v3/snapshot/[YOUR_ID]` and add Query Parameters with format set to `json`.
- In Fetch Dataset Results, add the Authorization header value `Bearer [CONFIGURE_YOUR_TOKEN]`.
- Use Iterate Records to split records into batches (defaults are fine if you want one batch at a time).
- In Classify Text Topics, set Input Text to `=Post title :{{ $json.title }} Post description : {{ $json.description || " "}}`.
- Ensure Gemini Chat Model A is connected as the language model for Classify Text Topics. Credential Required: Connect your googlePalmApi credentials.
- In Map Core Fields, map fields like post_id to `={{ $json.post_id }}`, url to `={{ $json.url }}`, and comments to `={{ $json.comments }}`.
- Keep Fallback Category as a safety path for unclassified records (it sets response to `No category`). A plain-code sketch of this step follows the list.
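If you prefer to read it as plain code, this sketch mirrors what Classify Text Topics and Map Core Fields do with each record. The `RedditRecord` shape is an assumption based on the fields the nodes reference:

```typescript
// Assumed shape of one Bright Data record, based on the fields the nodes use.
type RedditRecord = {
  post_id: string;
  url: string;
  title: string;
  description?: string;
  comments: { comment: string }[];
};

// Same template as the Classify Text Topics Input Text expression.
function classifierInput(record: RedditRecord): string {
  return `Post title :${record.title} Post description : ${record.description ?? " "}`;
}

// What Map Core Fields passes downstream; "No category" is the fallback
// applied by the Fallback Category node for unclassified posts.
function mapCoreFields(record: RedditRecord, category = "No category") {
  return {
    post_id: record.post_id,
    url: record.url,
    title: record.title,
    category,
    comments: record.comments,
  };
}
```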
Step 4: Analyze Comment Sentiment and Merge Results
Split comments, prepare the sentiment payload, run analysis, and consolidate results for output formatting.
- In Split Comments, set Field To Split Out to `comments`.
- In Prepare Comment Payload, map comment to `={{ $json.comment }}` and title to `={{ $('Map Core Fields').item.json.title }}`.
- In Analyze Comment Sentiment, set Input Text to `=title: {{ $('Iterate Records').first().json.title }} description: {{ $('Iterate Records').first().json.description }} Comments: {{ $json.comment || " "}}`.
- Ensure Gemini Chat Model B is connected as the language model for Analyze Comment Sentiment. Credential Required: Connect your googlePalmApi credentials.
- In Combine Streams, keep Number Inputs set to `6` to collect all sentiment paths. The split-and-payload hop is sketched below.
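Here's a small sketch of what Split Comments and Prepare Comment Payload produce: one item per comment, each carrying the parent post's title for context. The comment shape is an assumption based on the mappings above:

```typescript
// One outgoing item per comment; title travels with each payload so Gemini
// can score the comment in the context of its post.
function prepareCommentPayloads(post: {
  title: string;
  comments: { comment: string }[];
}) {
  return post.comments.map(({ comment }) => ({
    comment,            // the text Gemini scores
    title: post.title,  // pulled from Map Core Fields in the workflow
  }));
}
```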
Step 5: Format, Filter, and Store Results in Google Sheets
Format the final record, filter out unclear sentiment, and append/update the sheet.
- In Format Sentiment Output, map sentimentAnalysis to `={{ $json.sentimentAnalysis.category }}`, post_id to `={{ $('Iterate Records').first().json.post_id }}`, and comment to `={{ $json.comment }}`.
- In Filter Clear Sentiments, set the condition to `={{ $json.sentimentAnalysis }}` notContains `Not clear`.
- In Update Sentiment Sheet, set Operation to `appendOrUpdate`.
- In Update Sentiment Sheet, set Document ID to `[YOUR_ID]` and Sheet Name to `[YOUR_ID]`.
- In Update Sentiment Sheet, keep Matching Columns set to `url` and Mapping Mode to `autoMapInputData`.
- Credential Required: Connect your googleSheetsOAuth2Api credentials in Update Sentiment Sheet. The format-and-filter logic is sketched below.
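For reference, the format-and-filter logic boils down to something like this sketch. The `SentimentResult` shape and the presence of `url` on each row are assumptions based on the node mappings above:

```typescript
// Flatten the Gemini verdict into a row, then drop anything scored "Not clear"
// (mirrors Format Sentiment Output → Filter Clear Sentiments).
type SentimentResult = {
  sentimentAnalysis: { category: string };
  comment: string;
};

function toSheetRows(results: SentimentResult[], post_id: string, url: string) {
  return results
    .map((r) => ({
      post_id,
      url, // matching column for appendOrUpdate in the sheet
      comment: r.comment,
      sentimentAnalysis: r.sentimentAnalysis.category,
    }))
    .filter((row) => !row.sentimentAnalysis.includes("Not clear"));
}
```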
Step 6: Test & Activate Your Workflow
Run a manual test to verify the scrape, sentiment analysis, and sheet update before enabling in production.
- Click Execute Workflow and confirm Manual Launch Trigger fires and reaches Trigger Reddit Scrape.
- Wait for Status Router to route to Fetch Dataset Results (or loop through Delay Interval until ready).
- Verify that Format Sentiment Output includes values like sentimentAnalysis, comment, and post_id.
- Confirm that Update Sentiment Sheet writes or updates rows in your Google Sheet.
- When everything looks correct, toggle the workflow Active for production use.
Troubleshooting Tips
- Bright Data credentials can expire or need specific permissions. If things break, check the Authorization header in the Trigger Reddit Scrape (“scrap reddit”) and Retrieve Job Status (“Get status”) HTTP Request nodes first.
- If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
- Default prompts in AI nodes are generic. Add your brand voice early or you’ll be editing outputs forever.
Quick Answers
How long does setup take?
About 30 minutes if your keys and OAuth are ready.
Do I need coding skills?
No. You’ll connect accounts, paste API keys, and tweak a couple of fields like the Reddit search term.
Is this workflow free to run?
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in Bright Data scraping costs and Gemini API usage, which depends on how many comments you analyze.
Where should I host n8n?
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
Can I customize the workflow?
Yes, and you should. Update the JSON body in the “scrap reddit” HTTP Request node to change the search term, time window, or sort. Swap or refine the categories inside the “Text Classifier” node so you’re not lumping everything into generic buckets. If sentiment feels off, edit the system prompt used in “Analyze Comment Sentiment” (Gemini) so it understands your context, like “product feedback vs. event hype.” You can also tighten the “Filter Clear Sentiments” step to keep only strong signals for exec-facing reports.
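For example, a tweaked request body for a different topic might look like this. The keys mirror the original WWDC25 body; the values are illustrative, so check Bright Data's accepted options for date and sort_by:

```typescript
// Illustrative replacement for the JSON Body in the "scrap reddit" node.
const body = [
  {
    keyword: "Pixel 10",   // new search term to track
    date: "Past week",     // narrower time window
    num_of_posts: 50,      // smaller pull for a quick read
    sort_by: "Hot",        // surface high-engagement threads
  },
];
```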
Why is my Bright Data request failing?
Usually it’s a bad or missing API key in the Authorization header, or the key doesn’t have permission for the dataset you’re trying to run.
How much volume can this handle?
It depends more on your scraping limits and model usage than n8n itself. On n8n Cloud Starter, you can run a healthy number of executions each month for research workflows like this, and higher tiers handle more. If you self-host, you’re mostly limited by your server size and how fast you want to process comments. Practically, teams often run this on a few dozen posts at a time, then scale up once the Sheet format is dialed in.
Why use n8n instead of Zapier or Make?
For this workflow, n8n has a few advantages: more complex logic with unlimited branching at no extra cost, a self-hosting option for unlimited executions, and native AI-oriented nodes (like text classification and sentiment analysis) that are awkward to maintain elsewhere. Zapier or Make can still work if you only want a basic “new item to row in Sheets” flow. The sticking point is the middle: polling job status, splitting comments, merging results, and filtering. That’s where n8n is simply more comfortable. Talk to an automation expert if you’re not sure which fits.
Once this is running, your “Reddit sentiment” doc stops being a one-off scramble and becomes a living dataset. Honestly, that’s when the insights start getting good.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.