Reddit + Shotstack: ready to post vertical videos
You find a great Reddit thread. Then you lose an hour turning it into something you can actually post. Script, voiceover, subtitles, footage, edits, render. By the time it’s done, the moment is gone.
The Reddit-to-video grind hits content creators first, but social media managers and editors feel it too, because the “quick repurpose” task never stays quick. This workflow turns a single Reddit link into a vertical video you can publish without living in a timeline.
Below you’ll see exactly what the workflow does, the results you can expect, and what you need to run it in n8n.
How This Automation Works
The full n8n workflow, from trigger to final output:
n8n Workflow Template: Reddit + Shotstack: ready to post vertical videos
flowchart LR
subgraph sg0["Manual Execution Start Flow"]
direction LR
n0["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/itemLists.svg' width='40' height='40' /></div><br/>Divide Clip Items"]
n1["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Pexels Search Call"]
n2@{ icon: "mdi:robot", form: "rounded", label: "AI Text Generator", pos: "b", h: 48 }
n3["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Retrieve Video Links"]
n4["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Pick Three Videos"]
n5["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Merge Trio Videos"]
n6["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Fetch Reddit Token"]
n7["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Retrieve Reddit Thread"]
n8["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Build Video Timeline"]
n9["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/merge.svg' width='40' height='40' /></div><br/>Combine Media Audio Subs"]
n10["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/webhook.dark.svg' width='40' height='40' /></div><br/>Incoming Webhook Trigger"]
n11@{ icon: "mdi:swap-vertical", form: "rounded", label: "Split TTS Options", pos: "b", h: 48 }
n12["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Custom Transform Script"]
n13@{ icon: "mdi:swap-vertical", form: "rounded", label: "Split Reddit Link", pos: "b", h: 48 }
n14["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Convert to Reddit API URL"]
n15["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/merge.svg' width='40' height='40' /></div><br/>Join Token and URL"]
n16@{ icon: "mdi:swap-vertical", form: "rounded", label: "Split Video Duration", pos: "b", h: 48 }
n17["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/merge.svg' width='40' height='40' /></div><br/>Merge Length with Reddit JSON"]
n18["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/webhook.dark.svg' width='40' height='40' /></div><br/>Return Webhook Reply"]
n19["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Extract Text Content"]
n20["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/merge.svg' width='40' height='40' /></div><br/>Merge Clips with TTS"]
n21@{ icon: "mdi:swap-horizontal", form: "rounded", label: "Status Ready Check", pos: "b", h: 48 }
n22@{ icon: "mdi:robot", form: "rounded", label: "Generate Voice Audio", pos: "b", h: 48 }
n23@{ icon: "mdi:cog", form: "rounded", label: "Rename Data Fields", pos: "b", h: 48 }
n24["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/merge.svg' width='40' height='40' /></div><br/>Await Input Sync"]
n25["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/merge.svg' width='40' height='40' /></div><br/>Merge Uploads and TTS"]
n26["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Retrieve Audio URLs"]
n27["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Fetch Upload Links"]
n28["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Upload TTS to Render"]
n29["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Upload Videos to Render"]
n30@{ icon: "mdi:cog", form: "rounded", label: "Pause 10 Seconds", pos: "b", h: 48 }
n31@{ icon: "mdi:cog", form: "rounded", label: "Pause 15 Seconds", pos: "b", h: 48 }
n32@{ icon: "mdi:play-circle", form: "rounded", label: "Manual Execution Start", pos: "b", h: 48 }
n33["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Limit Comment Length"]
n34["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/merge.svg' width='40' height='40' /></div><br/>Merge Timeline with Request"]
n35["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Render Video Request"]
n36@{ icon: "mdi:cog", form: "rounded", label: "Delay Until Rendered", pos: "b", h: 48 }
n37["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Fetch Rendered Video"]
n38@{ icon: "mdi:swap-horizontal", form: "rounded", label: "Rendered Check", pos: "b", h: 48 }
n39@{ icon: "mdi:swap-vertical", form: "rounded", label: "Set Final URL", pos: "b", h: 48 }
n12 --> n22
n36 --> n37
n10 --> n11
n10 --> n6
n10 --> n16
n10 --> n13
n30 --> n3
n31 --> n26
n38 --> n39
n38 --> n36
n23 --> n9
n0 --> n19
n0 --> n20
n0 --> n1
n22 --> n25
n22 --> n27
n1 --> n4
n26 --> n21
n37 --> n38
n3 --> n5
n24 --> n31
n5 --> n9
n27 --> n25
n27 --> n24
n2 --> n0
n35 --> n34
n35 --> n36
n6 --> n15
n7 --> n17
n19 --> n9
n13 --> n14
n33 --> n2
n16 --> n17
n15 --> n7
n11 --> n20
n28 --> n24
n4 --> n29
n29 --> n30
n8 --> n35
n8 --> n34
n34 --> n18
n20 --> n12
n17 --> n33
n14 --> n15
n21 --> n23
n21 --> n31
n9 --> n8
n25 --> n28
n32 --> n6
n32 --> n13
n32 --> n16
n32 --> n11
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n32 trigger
class n2,n22 ai
class n21,n38 decision
class n1,n3,n6,n7,n10,n18,n26,n27,n28,n29,n35,n37 api
class n4,n5,n8,n12,n14,n19,n33 code
classDef customIcon fill:none,stroke:none
class n0,n1,n3,n4,n5,n6,n7,n8,n9,n10,n12,n14,n15,n17,n18,n19,n20,n24,n25,n26,n27,n28,n29,n33,n34,n35,n37 customIcon
The Problem: Reddit-to-video takes way too long
Turning a Reddit post into a decent vertical video is deceptively heavy. You have to read the thread, decide what to include, write a script that fits a short runtime, generate narration, build subtitles that actually sync, then hunt for B-roll that doesn’t feel random. And after that, you still have to assemble everything, render it, and fix the inevitable “caption timing is off” issue. One video can easily swallow an afternoon, which means you either post less or you post lower-quality content.
It adds up fast. Here’s where it usually breaks down in real life:
- You end up copy-pasting chunks of text into three different tools just to get a usable script.
- Subtitle timing becomes a mini editing project, especially when the voiceover pace changes.
- B-roll searching is a rabbit hole, and “good enough” footage still takes time to collect.
- Rendering and export settings get repeated from scratch, which invites mistakes and re-renders.
The Solution: Reddit thread in, vertical video link out
This workflow automates the full pipeline of turning a Reddit thread into a short vertical video. It starts with a webhook (or a manual run) where you pass in the Reddit link plus a few choices like voice and video length. n8n fetches the thread via the Reddit API, trims and structures the text, then uses OpenAI to summarize and split it into clip-sized beats that fit a short format. For each beat, it generates search queries, pulls matching vertical B-roll from Pexels, creates TTS narration, and builds subtitles that align to the audio. Finally, it uploads media to Shotstack, triggers a render (720×1280), waits for completion, and returns a clean video URL you can post.
The workflow kicks off when you submit a Reddit link and video settings. OpenAI turns the thread into a storyboard of clips, then Pexels, TTS, subtitles, and Shotstack assemble it into a finished vertical video. The output is a single URL, ready for TikTok, Shorts, or Reels.
What You Get: Automation vs. Results
| What This Workflow Automates | Results You’ll Get |
|---|---|
| Fetching the Reddit thread, trimming comments, writing the script with OpenAI, and generating TTS narration with synced subtitles | A publish-ready 720×1280 vertical video from a single Reddit link |
| Searching Pexels for matching B-roll, uploading media to Shotstack, building the timeline, and polling the render until it finishes | Hands-on time drops from roughly 2 hours per video to about 5 minutes per run |
Example: What This Looks Like
Say you turn 5 Reddit threads into videos each week. Manually, a “simple” version is still about 2 hours per video (script, TTS, captions, B-roll, timeline, render), so that’s roughly 10 hours weekly. With this workflow, you spend about 5 minutes submitting the Reddit link and settings, then wait for Shotstack to render. Even if processing takes around 20 minutes per video, your hands-on time drops to under an hour for the week.
What You’ll Need
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- Reddit API access to fetch thread and comments
- OpenAI API for summarization, clip structure, and TTS
- Pexels API key (get it from your Pexels developer dashboard)
- Shotstack API key (get it from the Shotstack dashboard)
Skill level: Intermediate. You’ll paste API keys, test a webhook payload, and tweak prompts or styling if you want a specific format.
Don’t want to set this up yourself? Talk to an automation expert (free 15-minute consultation).
How It Works
You trigger it with a Reddit link and settings. The workflow starts from an incoming webhook (or a manual run) and expects fields like `redditLink`, `videoLength`, `voice`, and `ttsSpeed`.
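A minimal sketch of what the caller might send to the Incoming Webhook Trigger. The field names come from the workflow; the example values and the validation helper are illustrative, not part of the template.

```javascript
// Example webhook payload for Incoming Webhook Trigger.
// Field names match the workflow; values here are placeholders.
const payload = {
  redditLink: "https://www.reddit.com/r/AskReddit/comments/abc123/example_thread/",
  videoLength: 60,  // target runtime in seconds (assumed unit)
  voice: "alloy",   // TTS voice id (depends on your TTS provider)
  ttsSpeed: 1.0,    // narration speed multiplier
};

// Quick sanity check before sending, so the workflow never starts with a
// missing field and fails deep inside the pipeline.
function validatePayload(p) {
  const required = ["redditLink", "videoLength", "voice", "ttsSpeed"];
  const missing = required.filter((k) => p[k] === undefined || p[k] === "");
  if (missing.length) throw new Error(`Missing fields: ${missing.join(", ")}`);
  if (!/^https?:\/\/(www\.)?reddit\.com\/r\/.+\/comments\//.test(p.redditLink)) {
    throw new Error("redditLink does not look like a Reddit thread URL");
  }
  return true;
}
```

Validating at the caller keeps failures cheap: a rejected request costs nothing, while a malformed run can burn OpenAI and Shotstack credits before it fails.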
Reddit content gets fetched and cleaned up. n8n requests a Reddit token, converts the public link into an API URL, pulls the thread JSON, then trims long comment sections so the story fits your target length.
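The "Convert to Reddit API URL" step boils down to a small transform. A sketch of the idea, assuming the OAuth host (`oauth.reddit.com`) since the workflow fetches a token first — the actual node code may differ:

```javascript
// Turn a public Reddit thread link into a JSON API endpoint.
function toRedditApiUrl(redditLink) {
  const url = new URL(redditLink);
  // Strip any trailing slash so ".json" attaches cleanly.
  const path = url.pathname.replace(/\/$/, "");
  return `https://oauth.reddit.com${path}.json`;
}

// toRedditApiUrl("https://www.reddit.com/r/AskReddit/comments/abc123/example_thread/")
// → "https://oauth.reddit.com/r/AskReddit/comments/abc123/example_thread.json"
```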
OpenAI turns the thread into clips, narration, and caption text. It generates a structured set of clip items, then produces voice audio per clip and prepares subtitle content that matches what’s being spoken.
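In the template the clip split comes from the OpenAI response, but the underlying idea — group sentences under a per-clip character budget so narration beats stay short — can be shown deterministically. A hypothetical fallback, not the workflow's actual code:

```javascript
// Split a narration script into clip-sized beats by grouping
// sentences until a character budget is reached.
function splitIntoClips(script, maxChars = 180) {
  const sentences = script.match(/[^.!?]+[.!?]+/g) || [script];
  const clips = [];
  let current = "";
  for (const s of sentences) {
    const next = (current + " " + s).trim();
    if (next.length > maxChars && current) {
      clips.push(current);       // budget exceeded: close the beat
      current = s.trim();        // start the next beat
    } else {
      current = next;
    }
  }
  if (current) clips.push(current);
  return clips;
}
```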
Pexels and Shotstack assemble the actual video. Pexels searches run per clip, the workflow selects and merges a small set of relevant vertical videos, uploads audio and footage to Shotstack, builds a timeline, waits for the render, and checks status until the final URL is ready.
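A rough sketch of the "Pick Three Videos" idea: from a Pexels search response, keep portrait clips and take the first three. The response shape (`videos[].width/height/video_files`) follows the public Pexels Videos API; the workflow's node may read different fields.

```javascript
// Select up to three vertical videos from a Pexels search response.
function pickThreeVertical(pexelsResponse) {
  return (pexelsResponse.videos || [])
    .filter((v) => v.height > v.width)  // vertical footage only
    .slice(0, 3)
    .map((v) => {
      // Prefer an HD file when available, else fall back to the first.
      const file =
        v.video_files.find((f) => f.quality === "hd") || v.video_files[0];
      return { url: file.link, duration: v.duration };
    });
}
```

Filtering on orientation before slicing matters: Pexels mixes landscape and portrait results, and a landscape clip dropped into a 720×1280 frame forces heavy cropping.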
You can easily modify the OpenAI prompts to match your channel style, or swap Pexels for another stock media source based on your needs. See the full implementation guide below for customization options.
Step-by-Step Implementation Guide
Step 1: Configure the Webhook Trigger
Set up the inbound trigger so external requests can start the workflow and fan out into parallel branches.
- Add and open Incoming Webhook Trigger.
- Copy the Webhook URL and ensure your caller sends a JSON body with keys like `voice`, `ttsSpeed`, `redditLink`, and `videoLength`.
- Confirm that Incoming Webhook Trigger outputs to Split TTS Options, Fetch Reddit Token, Split Video Duration, and Split Reddit Link in parallel.
Step 2: Configure Reddit Ingestion and Text Preparation
Wire the Reddit API flow to fetch the thread, normalize the URL, and limit the text before AI processing.
- Ensure Split Reddit Link feeds Convert to Reddit API URL, which then goes into Join Token and URL.
- Confirm Fetch Reddit Token merges into Join Token and URL and then calls Retrieve Reddit Thread.
- Route Retrieve Reddit Thread into Merge Length with Reddit JSON, along with Split Video Duration.
- Make sure Merge Length with Reddit JSON outputs to Limit Comment Length before the AI step.
Step 3: Set Up AI Text and Voice Generation
Generate the script from Reddit content, split into clip items, and synthesize voice audio.
- Connect Limit Comment Length to AI Text Generator and confirm it outputs into Divide Clip Items.
- From Divide Clip Items, verify the parallel outputs to Extract Text Content, Merge Clips with TTS, and Pexels Search Call.
- Ensure Merge Clips with TTS sends data to Custom Transform Script, then into Generate Voice Audio.
- Confirm Generate Voice Audio outputs to both Merge Uploads and TTS and Fetch Upload Links in parallel.
Step 4: Configure Video Search and Media Assembly
Search for footage, select videos, and combine them with audio and subtitles into a single timeline.
- Confirm Pexels Search Call feeds into Pick Three Videos, which then goes to Upload Videos to Render.
- Verify Upload Videos to Render triggers Pause 10 Seconds, then Retrieve Video Links into Merge Trio Videos.
- Ensure Merge Trio Videos outputs to Combine Media Audio Subs.
- Confirm Extract Text Content and Rename Data Fields also feed into Combine Media Audio Subs to align media, audio, and subtitles.
Step 5: Build and Render the Final Video
Assemble the timeline, render the video, and return a webhook response.
- Ensure Combine Media Audio Subs feeds into Build Video Timeline.
- Verify Build Video Timeline outputs to Render Video Request and Merge Timeline with Request in parallel.
- Confirm Render Video Request outputs to both Merge Timeline with Request and Delay Until Rendered in parallel.
- Check the render polling loop: Delay Until Rendered → Fetch Rendered Video → Rendered Check → Set Final URL.
- Ensure Merge Timeline with Request ends at Return Webhook Reply to respond immediately with job details.
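The render polling loop above (Delay Until Rendered → Fetch Rendered Video → Rendered Check → Set Final URL) can be expressed as plain code. Here `fetchStatus` stands in for the Shotstack render-status request, and the `done`/`url` fields are illustrative — the real response nests status differently:

```javascript
// Poll a render job until it reports done, then return the video URL.
async function waitForRender(fetchStatus, { intervalMs = 15000, maxTries = 40 } = {}) {
  for (let attempt = 0; attempt < maxTries; attempt++) {
    const status = await fetchStatus();                  // Fetch Rendered Video
    if (status.done) return status.url;                  // Set Final URL
    await new Promise((r) => setTimeout(r, intervalMs)); // Delay Until Rendered
  }
  throw new Error("Render did not finish within the polling window");
}
```

Capping `maxTries` is the part the Wait-node version makes easy to forget: without it, a stuck render keeps the execution alive indefinitely.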
Step 6: Sync Audio Uploads and Readiness Checks
Align TTS uploads with the render service and wait for audio assets to be ready.
- Confirm Fetch Upload Links outputs to Merge Uploads and TTS and Await Input Sync in parallel.
- Ensure Merge Uploads and TTS sends to Upload TTS to Render, and then into Await Input Sync.
- Verify the readiness loop: Await Input Sync → Pause 15 Seconds → Retrieve Audio URLs → Status Ready Check → Rename Data Fields.
Step 7: Test & Activate Your Workflow
Run a manual test, verify outputs, and activate the workflow for production.
- Use Manual Execution Start to test with sample input values such as `voice`, `ttsSpeed`, `redditLink`, and `videoLength`.
- Confirm that Return Webhook Reply responds with render request details and that Set Final URL populates the final video link.
- Check that Rendered Check loops until rendered, then stops once the final URL is set.
- Activate the workflow using the Active toggle and send a real HTTP request to Incoming Webhook Trigger to validate production execution.
Common Gotchas
- Reddit credentials can expire or need specific permissions. If things break, check your Reddit Developer App settings and the token request response first.
- If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
- Shotstack renders are asynchronous, so a missing “final URL” usually means the render is still processing. Check the Fetch Rendered Video response and confirm your API key is set in the HTTP Header Auth.
Frequently Asked Questions
**How long does setup take?**
About 45 minutes if you already have the API keys.
**Do I need coding skills?**
No. You’ll mostly paste credentials and test a sample webhook payload. The only “code-like” part is optional prompt and styling tweaks.
**Is it free to run?**
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in OpenAI, Pexels, and Shotstack usage costs.
**Should I use n8n Cloud or self-host?**
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
**Can I customize the voice and visuals?**
Yes, and it’s the part you should customize first. Change the voice field in the input payload to switch narration voices, then adjust the OpenAI prompt used for clip structure to match your pacing and tone. For visuals, edit the timeline construction in the Build Video Timeline code node (fonts, colors, fit modes, subtitle styling). If you want tighter videos, tweak the character-length logic around the Limit Comment Length node so clips don’t run long.
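For orientation, here is a minimal sketch of what the Build Video Timeline node assembles, following the Shotstack Edit API shape (timeline → tracks → clips, plus output). Clip timings, the soundtrack source, and asset URLs are placeholders; the real node computes them from the TTS durations and subtitle text.

```javascript
// Assemble a Shotstack edit payload from picked clips and a narration URL.
function buildTimeline(clips, audioUrl) {
  let start = 0;
  const videoClips = clips.map((c) => {
    const clip = {
      asset: { type: "video", src: c.url },
      start,              // clips play back-to-back
      length: c.duration,
      fit: "cover",       // crop footage to fill the vertical frame
    };
    start += c.duration;
    return clip;
  });
  return {
    timeline: {
      soundtrack: { src: audioUrl },
      tracks: [{ clips: videoClips }],
    },
    output: {
      format: "mp4",
      size: { width: 720, height: 1280 }, // vertical, as the workflow renders
    },
  };
}
```

Styling lives in this payload too, so fonts, colors, and subtitle placement are all edits to this object rather than to the render call.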
**Why would the Shotstack render fail?**
Most of the time it’s a missing or incorrect API key in the HTTP Header Auth for Shotstack. It can also fail if the upload endpoints return temporary links and the workflow waits too long before uploading, so check the Fetch Upload Links response and timing. Finally, if renders start but never finish, look at the Fetch Rendered Video response for status and error messages.
**How many videos can this handle?**
It depends more on your n8n plan and Shotstack throughput than the logic itself, but most teams run several videos a day without issues once it’s stable.
**Why n8n instead of Zapier or Make?**
For this use case, yes, n8n is the better fit. You need branching, waits, merges, and multi-step media handling, which is where n8n tends to feel less restrictive (and self-hosting avoids per-task pricing headaches). Zapier and Make can work, but they get awkward once you’re juggling async renders, file uploads, and “wait until ready” checks. n8n also makes it easier to keep the whole pipeline in one place, so troubleshooting isn’t a scavenger hunt. If you’re unsure, Talk to an automation expert and we’ll sanity-check the best path for your volume.
Once this is running, a Reddit link becomes a finished vertical video without the editing loop. Set it up once, then spend your time on publishing and testing hooks.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.