January 22, 2026

Reddit + Shotstack: ready-to-post vertical videos

Lisa Granqvist, Workflow Automation Expert

You find a great Reddit thread. Then you lose an hour turning it into something you can actually post. Script, voiceover, subtitles, footage, edits, render. By the time it’s done, the moment is gone.

This pain hits content creators hardest, but social media managers and editors feel it too, because the “quick repurpose” task never stays quick. This workflow turns a single Reddit link into a vertical video you can publish without living in a timeline.

Below you’ll see exactly what the workflow does, the results you can expect, and what you need to run it in n8n.

How This Automation Works

The full n8n workflow, from trigger to final output:

n8n Workflow Template: Reddit + Shotstack: ready-to-post vertical videos

The Problem: Reddit-to-video takes way too long

Turning a Reddit post into a decent vertical video is deceptively heavy. You have to read the thread, decide what to include, write a script that fits a short runtime, generate narration, build subtitles that actually sync, then hunt for B-roll that doesn’t feel random. And after that, you still have to assemble everything, render it, and fix the inevitable “caption timing is off” issue. One video can easily swallow an afternoon, which means you either post less or you post lower-quality content.

It adds up fast. Here’s where it usually breaks down in real life:

  • You end up copy-pasting chunks of text into three different tools just to get a usable script.
  • Subtitle timing becomes a mini editing project, especially when the voiceover pace changes.
  • B-roll searching is a rabbit hole, and “good enough” footage still takes time to collect.
  • Rendering and export settings get repeated from scratch, which invites mistakes and re-renders.

The Solution: Reddit thread in, vertical video link out

This workflow automates the full pipeline of turning a Reddit thread into a short vertical video. It starts with a webhook (or a manual run) where you pass in the Reddit link plus a few choices like voice and video length. n8n fetches the thread via the Reddit API, trims and structures the text, then uses OpenAI to summarize and split it into clip-sized beats that fit a short format. For each beat, it generates search queries, pulls matching vertical B-roll from Pexels, creates TTS narration, and builds subtitles that align to the audio. Finally, it uploads media to Shotstack, triggers a render (720×1280), waits for completion, and returns a clean video URL you can post.

The workflow kicks off when you submit a Reddit link and video settings. OpenAI turns the thread into a storyboard of clips, then Pexels, TTS, subtitles, and Shotstack assemble it into a finished vertical video. The output is a single URL, ready for TikTok, Shorts, or Reels.

What You Get: Automation vs. Results

Example: What This Looks Like

Say you turn 5 Reddit threads into videos each week. Manually, a “simple” version is still about 2 hours per video (script, TTS, captions, B-roll, timeline, render), so that’s roughly 10 hours weekly. With this workflow, you spend about 5 minutes submitting the Reddit link and settings, then wait for Shotstack to render. Even if processing takes around 20 minutes per video, your hands-on time drops to under an hour for the week.

What You’ll Need

  • n8n instance (try n8n Cloud free)
  • Self-hosting option if you prefer (Hostinger works well)
  • Reddit API access to fetch thread and comments
  • OpenAI API for summarization, clip structure, and TTS
  • Pexels API key (get it from your Pexels developer dashboard)
  • Shotstack API key (get it from the Shotstack dashboard)

Skill level: Intermediate. You’ll paste API keys, test a webhook payload, and tweak prompts or styling if you want a specific format.

Don’t want to set this up yourself? Talk to an automation expert (free 15-minute consultation).

How It Works

You trigger it with a Reddit link and settings. The workflow starts from an incoming webhook (or a manual run) and expects fields like redditLink, videoLength, voice, and ttsSpeed.

Reddit content gets fetched and cleaned up. n8n requests a Reddit token, converts the public link into an API URL, pulls the thread JSON, then trims long comment sections so the story fits your target length.

OpenAI turns the thread into clips, narration, and caption text. It generates a structured set of clip items, then produces voice audio per clip and prepares subtitle content that matches what’s being spoken.

Pexels and Shotstack assemble the actual video. Pexels searches run per clip, the workflow selects and merges a small set of relevant vertical videos, uploads audio and footage to Shotstack, builds a timeline, waits for the render, and checks status until the final URL is ready.

You can easily modify the OpenAI prompts to match your channel style, or swap Pexels for another stock media source based on your needs. See the full implementation guide below for customization options.

Step-by-Step Implementation Guide

Step 1: Configure the Webhook Trigger

Set up the inbound trigger so external requests can start the workflow and fan out into parallel branches.

  1. Add and open Incoming Webhook Trigger.
  2. Copy the Webhook URL and ensure your caller sends a JSON body with keys like voice, ttsSpeed, redditLink, and videoLength.
  3. Confirm that Incoming Webhook Trigger outputs to Split TTS Options, Fetch Reddit Token, Split Video Duration, and Split Reddit Link in parallel.
For quick local testing, you can also use Manual Execution Start, which fans out to the same four nodes in parallel.
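
A quick sketch of what a caller might send to the webhook, plus a sanity check you could run in a Code node before fanning out. The field names (redditLink, videoLength, voice, ttsSpeed) come from the workflow; the specific values (voice name, unit for videoLength) are illustrative assumptions:

```javascript
// Example webhook body for the Incoming Webhook Trigger.
// Field names match the workflow's expected keys; the specific
// values (voice name, speed range) are illustrative assumptions.
const samplePayload = {
  redditLink: "https://www.reddit.com/r/AskReddit/comments/abc123/example_thread/",
  videoLength: 60,   // target runtime in seconds (assumed unit)
  voice: "alloy",    // example OpenAI TTS voice name
  ttsSpeed: 1.0      // 1.0 = normal narration speed
};

// Minimal validation so malformed requests fail fast instead of
// breaking a downstream branch mid-render.
function validatePayload(body) {
  const errors = [];
  if (typeof body.redditLink !== "string" || !body.redditLink.includes("reddit.com")) {
    errors.push("redditLink must be a reddit.com URL");
  }
  if (typeof body.videoLength !== "number" || body.videoLength <= 0) {
    errors.push("videoLength must be a positive number");
  }
  if (typeof body.voice !== "string" || body.voice.length === 0) {
    errors.push("voice must be a non-empty string");
  }
  if (typeof body.ttsSpeed !== "number" || body.ttsSpeed <= 0) {
    errors.push("ttsSpeed must be a positive number");
  }
  return errors;
}
```

Validating up front also gives the caller an immediate, readable error from the webhook response instead of a silent failure twenty minutes into a render.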

Step 2: Configure Reddit Ingestion and Text Preparation

Wire the Reddit API flow to fetch the thread, normalize the URL, and limit the text before AI processing.

  1. Ensure Split Reddit Link feeds Convert to Reddit API URL, which then goes into Join Token and URL.
  2. Confirm Fetch Reddit Token merges into Join Token and URL and then calls Retrieve Reddit Thread.
  3. Route Retrieve Reddit Thread into Merge Length with Reddit JSON, along with Split Video Duration.
  4. Make sure Merge Length with Reddit JSON outputs to Limit Comment Length before the AI step.
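
The Convert to Reddit API URL step typically rewrites the public thread link into an OAuth API path. A minimal sketch of one common conversion, assuming the bearer-token flow the workflow fetches in parallel; the workflow's exact rewrite may differ:

```javascript
// Sketch of "Convert to Reddit API URL": keep only the thread path and
// point it at oauth.reddit.com, where the bearer token from
// "Fetch Reddit Token" is accepted. raw_json=1 avoids HTML-escaped text.
function toRedditApiUrl(publicLink) {
  const url = new URL(publicLink);
  // Strip query params and trailing slashes, keeping the thread path,
  // e.g. /r/AskReddit/comments/abc123/example
  const path = url.pathname.replace(/\/+$/, "");
  return `https://oauth.reddit.com${path}?raw_json=1`;
}
```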

Step 3: Set Up AI Text and Voice Generation

Generate the script from Reddit content, split into clip items, and synthesize voice audio.

  1. Connect Limit Comment Length to AI Text Generator and confirm it outputs into Divide Clip Items.
  2. From Divide Clip Items, verify the parallel outputs to Extract Text Content, Merge Clips with TTS, and Pexels Search Call.
  3. Ensure Merge Clips with TTS sends data to Custom Transform Script, then into Generate Voice Audio.
  4. Confirm Generate Voice Audio outputs to both Merge Uploads and TTS and Fetch Upload Links in parallel.
⚠️ Common Pitfall: If voice output seems empty, verify that the Reddit text is correctly trimmed in Limit Comment Length and that Custom Transform Script still passes the text field expected by Generate Voice Audio.
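
The trimming logic in Limit Comment Length can be sketched as a character budget derived from the requested runtime. The ~15 characters-per-second rate below is an assumption (roughly 150 wpm narration), not the workflow's exact figure; tune it to your TTS voice and speed:

```javascript
// Sketch of the "Limit Comment Length" idea: budget the script text by
// the requested video length so narration fits the runtime.
// charsPerSecond is an assumed speaking rate (~150 wpm).
function trimForRuntime(text, videoLengthSeconds, charsPerSecond = 15) {
  const budget = videoLengthSeconds * charsPerSecond;
  if (text.length <= budget) return text;
  // Cut at the last sentence boundary inside the budget so the
  // voiceover doesn't stop mid-sentence.
  const slice = text.slice(0, budget);
  const lastStop = Math.max(
    slice.lastIndexOf(". "),
    slice.lastIndexOf("! "),
    slice.lastIndexOf("? ")
  );
  return lastStop > 0 ? slice.slice(0, lastStop + 1) : slice;
}
```

If clips consistently run long or short, this ratio is the first knob to turn, as the FAQ on customization notes.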

Step 4: Configure Video Search and Media Assembly

Search for footage, select videos, and combine them with audio and subtitles into a single timeline.

  1. Confirm Pexels Search Call feeds into Pick Three Videos, which then goes to Upload Videos to Render.
  2. Verify Upload Videos to Render triggers Pause 10 Seconds, then Retrieve Video Links into Merge Trio Videos.
  3. Ensure Merge Trio Videos outputs to Combine Media Audio Subs.
  4. Confirm Extract Text Content and Rename Data Fields also feed into Combine Media Audio Subs to align media, audio, and subtitles.
Group-check all HTTP calls: Pexels Search Call, Retrieve Video Links, Fetch Upload Links, and Upload Videos to Render should all point to the correct external APIs with valid authentication headers.
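
The Pick Three Videos step can be sketched as a filter over the search response. The shape below follows Pexels' /videos/search payload (videos[].video_files[]); treat the field names as an assumption to re-check if you swap in another stock provider:

```javascript
// Sketch of "Pick Three Videos": keep only portrait clips from a
// Pexels-style search response and take the first three usable links.
function pickThreeVertical(searchResponse) {
  return (searchResponse.videos || [])
    .filter(v => v.height > v.width)            // portrait (9:16) only
    .slice(0, 3)
    .map(v => {
      // Prefer an HD file; fall back to the first file available.
      const hd = v.video_files.find(f => f.quality === "hd");
      return (hd || v.video_files[0]).link;
    });
}
```

Filtering on orientation here (or passing `orientation=portrait` in the search call itself) is what keeps landscape footage from being letterboxed into the 720×1280 frame.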

Step 5: Build and Render the Final Video

Assemble the timeline, render the video, and return a webhook response.

  1. Ensure Combine Media Audio Subs feeds into Build Video Timeline.
  2. Verify Build Video Timeline outputs to Render Video Request and Merge Timeline with Request in parallel.
  3. Confirm Render Video Request outputs to both Merge Timeline with Request and Delay Until Rendered in parallel.
  4. Check the render polling loop: Delay Until Rendered → Fetch Rendered Video → Rendered Check → Set Final URL.
  5. Ensure Merge Timeline with Request ends at Return Webhook Reply to respond immediately with job details.
⚠️ Common Pitfall: If rendering stalls, verify that Rendered Check routes back to Delay Until Rendered on the “false” branch and that your render API returns a stable status field.
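
The Rendered Check decision can be sketched as a small routing function. Shotstack's render status endpoint reports stages like "queued", "rendering", "saving", "done", and "failed"; confirm the exact set against your API version. Routing anything that isn't terminal back to the delay node is what keeps the polling loop alive:

```javascript
// Sketch of the "Rendered Check" routing logic over a Shotstack-style
// status response ({ response: { status, url } }).
function renderedCheck(statusResponse) {
  const status = statusResponse?.response?.status;
  if (status === "done") {
    // Terminal success: Set Final URL picks up response.url
    return { action: "finish", url: statusResponse.response.url };
  }
  if (status === "failed") {
    return { action: "fail", url: null };
  }
  // queued / fetching / rendering / saving: loop back to Delay Until Rendered
  return { action: "wait", url: null };
}
```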

Step 6: Sync Audio Uploads and Readiness Checks

Align TTS uploads with the render service and wait for audio assets to be ready.

  1. Confirm Fetch Upload Links outputs to Merge Uploads and TTS and Await Input Sync in parallel.
  2. Ensure Merge Uploads and TTS sends to Upload TTS to Render, and then into Await Input Sync.
  3. Verify the readiness loop: Await Input Sync → Pause 15 Seconds → Retrieve Audio URLs → Status Ready Check → Rename Data Fields.
If Status Ready Check keeps looping, confirm the audio readiness status values expected from Retrieve Audio URLs match your render provider’s API.

Step 7: Test & Activate Your Workflow

Run a manual test, verify outputs, and activate the workflow for production.

  1. Use Manual Execution Start to test with sample input values such as voice, ttsSpeed, redditLink, and videoLength.
  2. Confirm that Return Webhook Reply responds with render request details and that Set Final URL populates the final video link.
  3. Check that Rendered Check loops until rendered, then stops once the final URL is set.
  4. Activate the workflow using the Active toggle and send a real HTTP request to Incoming Webhook Trigger to validate production execution.

Common Gotchas

  • Reddit credentials can expire or need specific permissions. If things break, check your Reddit Developer App settings and the token request response first.
  • If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
  • Shotstack renders are asynchronous, so a missing “final URL” usually means the render is still processing. Check the Fetch Rendered Video response and confirm your API key is set in the HTTP Header Auth.

Frequently Asked Questions

How long does it take to set up this Reddit video automation?

About 45 minutes if you already have the API keys.

Do I need coding skills to automate Reddit video automation?

No. You’ll mostly paste credentials and test a sample webhook payload. The only “code-like” part is optional prompt and styling tweaks.

Is n8n free to use for this Reddit video automation workflow?

Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in OpenAI, Pexels, and Shotstack usage costs.

Where can I host n8n to run this automation?

Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.

Can I customize this Reddit video automation workflow for a different voice and visual style?

Yes, and it’s the part you should customize first. Change the voice field in the input payload to switch narration voices, then adjust the OpenAI prompt used for clip structure to match your pacing and tone. For visuals, edit the timeline construction in the Build Video Timeline code node (fonts, colors, fit modes, subtitle styling). If you want tighter videos, tweak the character-length logic around the Limit Comment Length node so clips don’t run long.
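
When you edit the Build Video Timeline code node, the changes usually land in the Shotstack edit payload. A minimal sketch of that shape, assuming Shotstack's edit API; the styling values (fit mode, title size, position) are placeholders showing where the knobs live, so check them against your Shotstack version:

```javascript
// Illustrative Shotstack edit payload: one B-roll clip plus a subtitle
// overlay, rendered at the workflow's 720x1280 vertical output.
// Field names follow Shotstack's edit API; values are placeholders.
const editPayload = {
  timeline: {
    tracks: [
      {
        clips: [
          { asset: { type: "video", src: "https://example.com/broll-1.mp4" },
            start: 0, length: 5, fit: "crop" }          // fill the 9:16 frame
        ]
      },
      {
        clips: [
          { asset: { type: "title", text: "Subtitle line one", size: "small" },
            start: 0, length: 5, position: "bottom" }   // caption placement
        ]
      }
    ]
  },
  output: { format: "mp4", size: { width: 720, height: 1280 } } // vertical render
};
```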

Why is my Shotstack connection failing in this workflow?

Most of the time it’s a missing or incorrect API key in the HTTP Header Auth for Shotstack. It can also fail if the upload endpoints return temporary links and the workflow waits too long before uploading, so check the Fetch Upload Links response and timing. Finally, if renders start but never finish, look at the Fetch Rendered Video response for status and error messages.

How many videos can this Reddit video automation handle?

It depends more on your n8n plan and Shotstack throughput than the logic itself, but most teams run several videos a day without issues once it’s stable.

Is this Reddit video automation better than using Zapier or Make?

For this use case, yes. You need branching, waits, merges, and multi-step media handling, which is where n8n tends to feel less restrictive (and self-hosting avoids per-task pricing headaches). Zapier and Make can work, but they get awkward once you’re juggling async renders, file uploads, and “wait until ready” checks. n8n also makes it easier to keep the whole pipeline in one place, so troubleshooting isn’t a scavenger hunt. If you’re unsure, Talk to an automation expert and we’ll sanity-check the best path for your volume.

Once this is running, a Reddit link becomes a finished vertical video without the editing loop. Set it up once, then spend your time on publishing and testing hooks.

Need Help Setting This Up?

Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.

Lisa Granqvist

Workflow Automation Expert

Expert in workflow automation and no-code tools.
