January 22, 2026

Reddit to Google Sheets, sentiment insights you trust

Lisa Granqvist, Workflow Automation Expert

You start with a simple question: “What are people saying on Reddit?” Then it turns into tabs, copy-paste, half-read comment threads, and a spreadsheet that’s already outdated by the time you share it.

Brand managers feel this when leadership asks for “the vibe” before a launch. Market research analysts and product folks get it too. This Reddit sentiment automation keeps a Google Sheet updated with categorized posts and comment sentiment, so you’re working with evidence, not guesses.

You’ll see how the workflow pulls Reddit data with Bright Data, runs Gemini sentiment per comment, and writes clean rows to Google Sheets you can actually trust.

How This Automation Works

Here’s the complete workflow you’ll be setting up:

n8n Workflow Template: Reddit to Google Sheets, sentiment insights you trust

Why This Matters: Reddit sentiment is messy without a system

Reddit is brutally honest, which is exactly why it’s useful and exactly why it’s hard to summarize. During a tentpole moment like WWDC, a single thread can explode with jokes, hot takes, and real product feedback buried three screens down. Manually, you end up sampling a few comments, missing context, and accidentally over-weighting the loudest voices. Worse, you spend your time gathering data instead of interpreting it. By the time your notes hit Slack or a deck, the conversation has already moved on.

It adds up fast. Here’s where the workflow earns its keep.

  • You lose hours hunting for relevant posts, then re-checking them because the thread changed overnight.
  • Copying posts and comments into Sheets invites mistakes, and one missed column ruins sorting and filtering.
  • Sentiment becomes a “gut feel” because reading 200 comments is not a repeatable method.
  • Sharing insights is slow since teammates need your context to trust what you found.

What You’ll Build: Reddit sentiment analysis that lands in Google Sheets

This workflow automates sentiment analysis of Reddit posts related to Apple’s WWDC25 event (and you can swap the topic to anything). You manually launch it in n8n, which triggers a Bright Data scraping job that searches Reddit based on the parameters you set. The workflow then polls Bright Data until the scraping job is finished, pulls the dataset results, and processes each post record in batches. Next, it classifies the post text into useful topics, maps the fields you care about (title, link, category, and more), then splits the comment text into individual items. Each comment gets sent through Gemini sentiment analysis, the results are merged back into a clean format, low-signal entries can be filtered out, and the final rows are appended into a Google Sheet that stays current.

The workflow starts with scraping, then shifts into “make sense of it” mode with topic classification and comment-level sentiment. Finally, it formats everything into a spreadsheet-friendly structure and writes it to Google Sheets so your team can review, sort, and share without extra cleanup.


Expected Results

Say you’re tracking WWDC chatter across 20 relevant posts, and each post has about 30 comments worth reading. Manually, even a quick pass can take about 2 minutes per comment once you include scrolling, context switching, and copying notes, which is roughly 20 hours of attention. With this workflow, you spend about 10 minutes adjusting the scrape parameters and launching it, then wait while Bright Data runs and Gemini scores sentiment in the background. You typically end up with a Sheet you can scan in minutes, then use your time on the part that matters: what to do next.
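The back-of-the-envelope math above is easy to sanity-check. The figures below are the scenario's illustrative assumptions, not measured benchmarks:

```python
# Illustrative assumptions from the scenario above, not benchmarks
posts = 20
comments_per_post = 30
minutes_per_comment = 2  # scrolling, context switching, copying notes

total_comments = posts * comments_per_post  # 600 comments
manual_hours = total_comments * minutes_per_comment / 60
print(f"{total_comments} comments is about {manual_hours:.0f} hours of manual review")
# 600 comments is about 20 hours of manual review
```

Adjust the three inputs to your own topic and the comparison against a ten-minute setup usually speaks for itself.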

Before You Start

  • n8n instance (try n8n Cloud free)
  • Self-hosting option if you prefer (Hostinger works well)
  • Bright Data for scraping Reddit posts and comments
  • Google Sheets to store and share the sentiment log
  • Gemini API key (get it from Google AI Studio / Google AI credentials)

Skill level: Intermediate. You won’t code, but you will paste API keys, connect OAuth, and tweak a JSON body safely.

Want someone to build this for you? Talk to an automation expert (free 15-minute consultation).

Step by Step

You launch the workflow manually. This is ideal for research moments (like WWDC week) when you want control over when a fresh pull happens.

Bright Data starts a Reddit scraping job, then n8n checks the status. The workflow triggers the scrape via HTTP Request, waits a short interval, and polls until Bright Data reports the dataset is ready.

Posts are processed and categorized before sentiment even begins. n8n iterates through records, uses a text classifier to tag topics, then maps the fields you’ll want in the spreadsheet so everything stays consistent.

Comments are split and analyzed with Gemini sentiment per comment. Each comment becomes a small payload, Gemini returns a sentiment result, and n8n merges and formats everything into rows that are easy to filter.

Google Sheets gets updated. A filter can keep only “clear” sentiment entries, then the workflow appends results into your target sheet.

You can easily modify the Reddit search term to track a different event or brand based on your needs. See the full implementation guide below for customization options.

Step-by-Step Implementation Guide

Step 1: Configure the Manual Trigger

Start the workflow manually so you can validate the Bright Data scrape and downstream AI processing before scheduling.

  1. Add Manual Launch Trigger as the start node (no configuration required).
  2. Connect Manual Launch Trigger to Trigger Reddit Scrape.

Step 2: Connect Bright Data and Manage Job Status

Trigger the dataset scrape, poll for completion, and route based on job status.

  1. In Trigger Reddit Scrape, set URL to https://api.brightdata.com/datasets/v3/trigger and Method to POST.
  2. In Trigger Reddit Scrape, set JSON Body to [ { "keyword": "WWDC25", "date": "Past month", "num_of_posts": 100, "sort_by": "New" } ].
  3. In Trigger Reddit Scrape, under Query Parameters, set dataset_id to [YOUR_ID], include_errors to true, type to discover_new, and discover_by to keyword.
  4. In Trigger Reddit Scrape, under Headers, set Authorization to Bearer [CONFIGURE_YOUR_TOKEN].
  5. In Retrieve Job Status, set URL to =https://api.brightdata.com/datasets/v3/progress/{{ $json.snapshot_id }} and add the same Authorization header.
  6. In Status Router, add a rule for ready with Left Value ={{ $json.status }} equals ready, and a rule for running with Left Value ={{ $json.status }} equals running.
  7. Configure Delay Interval with Amount set to 15 and route the running path to it so it loops back to Retrieve Job Status.

⚠️ Common Pitfall: If the Bright Data token or dataset ID is incorrect, Retrieve Job Status will never reach ready and the workflow will keep looping through Delay Interval.
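If it helps to see the trigger-and-poll pattern outside n8n, here is a rough sketch. The endpoint, query parameters, and JSON body mirror the node settings above; the function names and the routing labels are illustrative, not Bright Data's SDK:

```python
BRIGHT_DATA_API = "https://api.brightdata.com/datasets/v3"

def build_trigger_request(keyword, dataset_id, token):
    """Mirror the Trigger Reddit Scrape node: POST body, query params, auth header."""
    return {
        "method": "POST",
        "url": f"{BRIGHT_DATA_API}/trigger",
        "params": {
            "dataset_id": dataset_id,
            "include_errors": "true",
            "type": "discover_new",
            "discover_by": "keyword",
        },
        "headers": {"Authorization": f"Bearer {token}"},
        "json": [{"keyword": keyword, "date": "Past month",
                  "num_of_posts": 100, "sort_by": "New"}],
    }

def route_status(progress):
    """Mirror the Status Router rules: ready fetches results, running loops back."""
    status = progress.get("status")
    if status == "ready":
        return "fetch_results"
    if status == "running":
        return "delay_15s_then_repoll"
    return "inspect_error"  # any other status deserves a manual look
```

Notice that anything other than ready or running falls through to an error path, which is exactly why a bad token or dataset ID leaves the n8n loop spinning: the API never reports ready.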

Step 3: Fetch Results and Classify Post Topics

Once the dataset is ready, load the snapshot, iterate over records, and classify each post topic.

  1. In Fetch Dataset Results, set URL to =https://api.brightdata.com/datasets/v3/snapshot/[YOUR_ID] and add Query Parameters with format set to json.
  2. In Fetch Dataset Results, add the Authorization header value Bearer [CONFIGURE_YOUR_TOKEN].
  3. Use Iterate Records to split records into batches (defaults are fine if you want one batch at a time).
  4. In Classify Text Topics, set Input Text to =Post title :{{ $json.title }} Post description : {{ $json.description || " "}}.
  5. Ensure Gemini Chat Model A is connected as the language model for Classify Text Topics. Credential Required: Connect your googlePalmApi credentials.
  6. In Map Core Fields, map fields like post_id to ={{ $json.post_id }}, url to ={{ $json.url }}, and comments to ={{ $json.comments }}.
  7. Keep Fallback Category as a safety path for unclassified records (it sets response to No category).

If classification accuracy seems low, adjust the category descriptions inside Classify Text Topics before moving to sentiment analysis.
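The classifier input and the fallback behavior boil down to two small functions. This is a sketch of what the nodes do, not their implementation; the empty-string fallback matches the `|| " "` expression above:

```python
def build_classifier_input(post):
    """Mirror the Classify Text Topics input expression, with a space
    fallback when the post has no description."""
    description = post.get("description") or " "
    return f"Post title :{post['title']} Post description : {description}"

def apply_fallback(classification):
    """Mirror the Fallback Category node for unclassified records."""
    if not classification or not classification.get("response"):
        return {"response": "No category"}
    return classification
```

Keeping the fallback path means a record the model can't place still lands in the sheet with "No category" instead of silently disappearing.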

Step 4: Analyze Comment Sentiment and Merge Results

Split comments, prepare the sentiment payload, run analysis, and consolidate results for output formatting.

  1. In Split Comments, set Field To Split Out to comments.
  2. In Prepare Comment Payload, map comment to ={{ $json.comment }} and title to ={{ $('Map Core Fields').item.json.title }}.
  3. In Analyze Comment Sentiment, set Input Text to =title: {{ $('Iterate Records').first().json.title }} description: {{ $('Iterate Records').first().json.description }} Comments: {{ $json.comment || " "}}.
  4. Ensure Gemini Chat Model B is connected as the language model for Analyze Comment Sentiment. Credential Required: Connect your googlePalmApi credentials.
  5. In Combine Streams, keep Number Inputs set to 6 to collect all sentiment paths.

⚠️ Common Pitfall: Do not add credentials to Classify Text Topics or Analyze Comment Sentiment directly—add them on Gemini Chat Model A and Gemini Chat Model B.
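The split-then-merge dance in this step can be sketched as pure functions. The sentiment values in the test scenario are placeholders for what Gemini would return, and the helper names are illustrative:

```python
def split_comments(post):
    """Mirror Split Comments + Prepare Comment Payload: one item per comment,
    carrying the parent post title for context."""
    return [{"title": post["title"], "comment": c}
            for c in post.get("comments", [])]

def merge_sentiments(payloads, sentiments):
    """Mirror Combine Streams: pair each comment payload with its
    sentiment result, in order."""
    return [{**p, "sentimentAnalysis": s}
            for p, s in zip(payloads, sentiments)]
```

The key design point is that each comment becomes its own item before analysis, so one weird comment can't distort the score for the whole thread.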

Step 5: Format, Filter, and Store Results in Google Sheets

Format the final record, filter out unclear sentiment, and append/update the sheet.

  1. In Format Sentiment Output, map sentimentAnalysis to ={{ $json.sentimentAnalysis.category }}, post_id to ={{ $('Iterate Records').first().json.post_id }}, and comment to ={{ $json.comment }}.
  2. In Filter Clear Sentiments, set the condition to ={{ $json.sentimentAnalysis }} notContains Not clear.
  3. In Update Sentiment Sheet, set Operation to appendOrUpdate.
  4. In Update Sentiment Sheet, set Document ID to [YOUR_ID] and Sheet Name to [YOUR_ID].
  5. In Update Sentiment Sheet, keep Matching Columns set to url and Mapping Mode to autoMapInputData.
  6. Credential Required: Connect your googleSheetsOAuth2Api credentials in Update Sentiment Sheet.

If records are not updating, confirm that the url column exists in the sheet and matches the workflow field naming exactly.
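Conceptually, the filter and the appendOrUpdate behavior look like this. It is a sketch of what the nodes do (with url as the matching column), not the Google Sheets node's actual implementation:

```python
def filter_clear(rows):
    """Mirror Filter Clear Sentiments: drop rows whose sentiment
    contains 'Not clear'."""
    return [r for r in rows if "Not clear" not in r["sentimentAnalysis"]]

def append_or_update(sheet, rows):
    """Mirror appendOrUpdate with url as the matching column:
    existing urls are updated in place, new urls are appended."""
    by_url = {row["url"]: row for row in sheet}
    for r in rows:
        by_url[r["url"]] = {**by_url.get(r["url"], {}), **r}
    return list(by_url.values())
```

This is also why a missing or misnamed url column breaks updates: without the matching key, every run appends duplicates instead of refreshing existing rows.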

Step 6: Test & Activate Your Workflow

Run a manual test to verify the scrape, sentiment analysis, and sheet update before enabling in production.

  1. Click Execute Workflow and confirm Manual Launch Trigger fires and reaches Trigger Reddit Scrape.
  2. Wait for Status Router to route to Fetch Dataset Results (or loop through Delay Interval until ready).
  3. Verify that Format Sentiment Output includes values like sentimentAnalysis, comment, and post_id.
  4. Confirm that Update Sentiment Sheet writes or updates rows in your Google Sheet.
  5. When everything looks correct, toggle the workflow Active for production use.

Troubleshooting Tips

  • Bright Data credentials can expire or need specific permissions. If things break, check the Authorization header in the Trigger Reddit Scrape and Retrieve Job Status HTTP Request nodes first.
  • If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
  • Default prompts in AI nodes are generic. Add your brand voice early or you’ll be editing outputs forever.

Quick Answers

What’s the setup time for this Reddit sentiment automation?

About 30 minutes if your keys and OAuth are ready.

Is coding required for this Reddit sentiment tracking?

No. You’ll connect accounts, paste API keys, and tweak a couple of fields like the Reddit search term.

Is n8n free to use for this Reddit sentiment automation workflow?

Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in Bright Data scraping costs and Gemini API usage, which depends on how many comments you analyze.

Where can I host n8n to run this automation?

Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.

Can I modify this Reddit sentiment automation workflow for different use cases?

Yes, and you should. Update the JSON body in the Trigger Reddit Scrape HTTP Request node to change the search term, time window, or sort. Swap or refine the categories inside the Classify Text Topics node so you’re not lumping everything into generic buckets. If sentiment feels off, edit the system prompt used in Analyze Comment Sentiment (Gemini) so it understands your context, like “product feedback vs. event hype.” You can also tighten the Filter Clear Sentiments step to keep only strong signals for exec-facing reports.
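For example, to track a different event, the JSON body from Step 2 might become something like the following. The keyword and post count here are illustrative; check Bright Data’s accepted values before changing date or sort_by:

```json
[
  {
    "keyword": "Galaxy Unpacked",
    "date": "Past month",
    "num_of_posts": 50,
    "sort_by": "New"
  }
]
```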

Why is my Bright Data connection failing in this workflow?

Usually it’s a bad or missing API key in the Authorization header, or the key doesn’t have permission for the dataset you’re trying to run.

What volume can this Reddit sentiment automation workflow process?

It depends more on your scraping limits and model usage than n8n itself. On n8n Cloud Starter, you can run a healthy number of executions each month for research workflows like this, and higher tiers handle more. If you self-host, you’re mostly limited by your server size and how fast you want to process comments. Practically, teams often run this on a few dozen posts at a time, then scale up once the Sheet format is dialed in.

Is this Reddit sentiment automation better than using Zapier or Make?

For this workflow, n8n has a few advantages: more complex logic with unlimited branching at no extra cost, a self-hosting option for unlimited executions, and native AI-oriented nodes (like text classification and sentiment analysis) that are awkward to maintain elsewhere. Zapier or Make can still work if you only want a basic “new item to row in Sheets” flow. The sticking point is the middle: polling job status, splitting comments, merging results, and filtering. That’s where n8n is simply more comfortable. Talk to an automation expert if you’re not sure which fits.

Once this is running, your “Reddit sentiment” doc stops being a one-off scramble and becomes a living dataset. Honestly, that’s when the insights start getting good.

Need Help Setting This Up?

Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.

Lisa Granqvist

Workflow Automation Expert

Expert in workflow automation and no-code tools.
