Baserow + HeyGen, batch short videos without chaos
Your “simple” short-video process probably isn’t simple anymore. Briefs live in one place, scripts in another, avatars somewhere else, and the status updates get lost in chat threads.
This Baserow + HeyGen automation targets content managers first, but agency owners and solo marketers will feel the benefit too. Without it, you end up re-checking details, re-running renders, and still shipping fewer posts than you planned.
This workflow turns a Baserow queue into finished short videos (with optional avatars, captions, visuals, and music), then writes the results back so you always know what’s done and what needs attention.
How This Automation Works
The full n8n workflow, from trigger to final output:
n8n Workflow Template: Baserow + HeyGen, batch short videos without chaos
flowchart LR
subgraph sg0["Basic LLM Chain Flow"]
direction LR
n0@{ icon: "mdi:swap-horizontal", form: "rounded", label: "Should Process?", pos: "b", h: 48 }
n1@{ icon: "mdi:swap-horizontal", form: "rounded", label: "Switch ScriptType", pos: "b", h: 48 }
n2@{ icon: "mdi:robot", form: "rounded", label: "Basic LLM Chain", pos: "b", h: 48 }
n3@{ icon: "mdi:swap-horizontal", form: "rounded", label: "If", pos: "b", h: 48 }
n4@{ icon: "mdi:swap-vertical", form: "rounded", label: "Body", pos: "b", h: 48 }
n5@{ icon: "mdi:robot", form: "rounded", label: "Structured Output Parser", pos: "b", h: 48 }
n6@{ icon: "mdi:brain", form: "rounded", label: "OpenAI Chat Model", pos: "b", h: 48 }
n7["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Leo - Improve Prompt"]
n8["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Leo - Get imageId"]
n9["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Leo - Generate Image"]
n10@{ icon: "mdi:cog", form: "rounded", label: "Wait1", pos: "b", h: 48 }
n11@{ icon: "mdi:swap-vertical", form: "rounded", label: "Scenes Mapping", pos: "b", h: 48 }
n12@{ icon: "mdi:swap-vertical", form: "rounded", label: "Split Out", pos: "b", h: 48 }
n13["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Runway - Create Video"]
n14["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Runway - Get Video"]
n15@{ icon: "mdi:cog", form: "rounded", label: "Wait2", pos: "b", h: 48 }
n16@{ icon: "mdi:swap-vertical", form: "rounded", label: "loop_over_scenes", pos: "b", h: 48 }
n17["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Code"]
n18@{ icon: "mdi:cog", form: "rounded", label: "Wait", pos: "b", h: 48 }
n19["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>json2video : Video Rendering"]
n20["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>json2video : Check Video Ren.."]
n21["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/baserow.svg' width='40' height='40' /></div><br/>Baserow"]
n22["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/baserow.svg' width='40' height='40' /></div><br/>Baserow Processing"]
n23@{ icon: "mdi:swap-vertical", form: "rounded", label: "output", pos: "b", h: 48 }
n24["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/baserow.svg' width='40' height='40' /></div><br/>Update Script"]
n25@{ icon: "mdi:swap-horizontal", form: "rounded", label: "BackgroundType", pos: "b", h: 48 }
n26@{ icon: "mdi:swap-vertical", form: "rounded", label: "output image", pos: "b", h: 48 }
n27@{ icon: "mdi:cog", form: "rounded", label: "Execute Workflow2", pos: "b", h: 48 }
n28@{ icon: "mdi:cog", form: "rounded", label: "Execute Workflow3", pos: "b", h: 48 }
n29@{ icon: "mdi:cog", form: "rounded", label: "Execute Workflow4", pos: "b", h: 48 }
n30["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>HeyGen"]
n31@{ icon: "mdi:cog", form: "rounded", label: "Wait4", pos: "b", h: 48 }
n32["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>HeyGen : Check Video"]
n33@{ icon: "mdi:swap-horizontal", form: "rounded", label: "heygen_response", pos: "b", h: 48 }
n34@{ icon: "mdi:cog", form: "rounded", label: "Wait6", pos: "b", h: 48 }
n35["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>CaptionsAI1"]
n36["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>CaptionsAI : Check Poll1"]
n37@{ icon: "mdi:cog", form: "rounded", label: "Aggregate", pos: "b", h: 48 }
n38@{ icon: "mdi:swap-horizontal", form: "rounded", label: "j2v_response", pos: "b", h: 48 }
n39@{ icon: "mdi:cog", form: "rounded", label: "json2video Execute ERROR", pos: "b", h: 48 }
n40@{ icon: "mdi:cog", form: "rounded", label: "json2video Execute ERROR1", pos: "b", h: 48 }
n41@{ icon: "mdi:cog", form: "rounded", label: "CAPTIONS Execute ERROR", pos: "b", h: 48 }
n42@{ icon: "mdi:swap-horizontal", form: "rounded", label: "cap_response", pos: "b", h: 48 }
n43["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Code Add Sub"]
n44@{ icon: "mdi:cog", form: "rounded", label: "Execute Workflow", pos: "b", h: 48 }
n45["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Code Heygen"]
n46["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/webhook.dark.svg' width='40' height='40' /></div><br/>Webhook"]
n47@{ icon: "mdi:swap-horizontal", form: "rounded", label: "If_with_heygen", pos: "b", h: 48 }
n48@{ icon: "mdi:swap-horizontal", form: "rounded", label: "If_with_avatar", pos: "b", h: 48 }
n49@{ icon: "mdi:cog", form: "rounded", label: "heygen Execute ERROR", pos: "b", h: 48 }
n50@{ icon: "mdi:cog", form: "rounded", label: "heygen Execute ERROR2", pos: "b", h: 48 }
n51@{ icon: "mdi:cog", form: "rounded", label: "Execute Workflow5", pos: "b", h: 48 }
n52@{ icon: "mdi:cog", form: "rounded", label: "CAPTIONS Execute ERROR1", pos: "b", h: 48 }
n53@{ icon: "mdi:cog", form: "rounded", label: "Execute Workflow6", pos: "b", h: 48 }
n54@{ icon: "mdi:robot", form: "rounded", label: "Basic LLM Chain Manual", pos: "b", h: 48 }
n55@{ icon: "mdi:brain", form: "rounded", label: "OpenAI Chat Model1", pos: "b", h: 48 }
n3 --> n54
n3 --> n44
n4 --> n0
n17 --> n47
n18 --> n20
n10 --> n8
n15 --> n14
n31 --> n32
n34 --> n36
n30 --> n31
n30 --> n49
n23 --> n16
n46 --> n4
n37 --> n35
n12 --> n16
n35 --> n34
n35 --> n52
n45 --> n19
n43 --> n19
n42 --> n43
n42 --> n34
n42 --> n41
n38 --> n21
n38 --> n18
n38 --> n40
n26 --> n16
n25 --> n13
n25 --> n26
n48 --> n37
n48 --> n17
n47 --> n30
n47 --> n19
n11 --> n12
n2 --> n11
n2 --> n24
n2 --> n29
n0 --> n1
n0 --> n22
n33 --> n45
n33 --> n31
n16 --> n48
n16 --> n7
n8 --> n25
n6 -.-> n2
n1 --> n3
n1 --> n2
n55 -.-> n54
n14 --> n23
n14 --> n53
n32 --> n33
n32 --> n50
n9 --> n10
n9 --> n28
n7 --> n9
n7 --> n51
n13 --> n15
n13 --> n27
n54 --> n24
n54 --> n11
n54 --> n29
n36 --> n42
n5 -.-> n2
n5 -.-> n54
n19 --> n18
n19 --> n39
n20 --> n38
end
subgraph sg1["When Executed by Another Workflow Flow"]
direction LR
n56["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/baserow.svg' width='40' height='40' /></div><br/>Baserow Error"]
n57@{ icon: "mdi:location-exit", form: "rounded", label: "Stop and Error", pos: "b", h: 48 }
n58@{ icon: "mdi:play-circle", form: "rounded", label: "When Executed by Another Wor..", pos: "b", h: 48 }
n56 --> n57
n58 --> n56
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n58 trigger
class n2,n5,n54 ai
class n6,n55 aiModel
class n0,n1,n3,n25,n33,n38,n42,n47,n48 decision
class n7,n8,n9,n13,n14,n19,n20,n30,n32,n35,n36,n46 api
class n17,n43,n45 code
classDef customIcon fill:none,stroke:none
class n7,n8,n9,n13,n14,n17,n19,n20,n21,n22,n24,n30,n32,n35,n36,n43,n45,n46,n56 customIcon
The Problem: Short-video batching turns into tab chaos
Batching short videos sounds efficient until you actually try to do it. You start with a list of ideas, then you’re bouncing between a database, a doc for scripts, an AI tool for visuals, another AI tool for avatars, and a separate place for captions and exports. Somewhere in that shuffle, a voice setting gets missed, the wrong background style slips in, or a render fails and nobody notices for hours. The worst part is the mental load: you’re not just creating content, you’re babysitting a production line made of browser tabs.
It adds up fast. Here’s where it breaks down.
- Even one short video can require 10+ tiny checks, and each check steals your focus.
- Status tracking becomes a mess because “in progress” lives in someone’s memory, not in your system.
- When you try to scale to a weekly batch, errors multiply, and rework quietly eats the entire time you hoped to save.
- Most teams end up with inconsistent output because settings drift from one video to the next.
The Solution: Queue in Baserow, generate in HeyGen, track everything
This n8n workflow is designed like a small production system. A new request comes in through an incoming webhook (typically tied to a form or a queued record), then the workflow decides how to process it: single video mode for quick turnarounds, or bulk mode when you want to generate a whole batch. Next, it handles script creation (either AI-written via an LLM or pulled from your own input), maps the fields into a clean payload, and generates the media pieces needed for the final edit. Depending on your settings, it can generate visuals, build scenes, request an avatar video from HeyGen, add captions, and assemble the final render. When it’s done, it updates your Baserow record so the whole team sees the output and the status without asking around.
The workflow starts with a queued brief (often stored in Baserow) and routes it based on your chosen script type and video options. It then generates assets, polls external tools until results are ready, and finally writes back the finished output details to Baserow so you can review, retry, or publish.
What You Get: Automation vs. Results
| What This Workflow Automates | Results You’ll Get |
|---|---|
| Script generation (AI or manual), visuals, optional HeyGen avatar segments, captions, and final rendering | Finished short videos produced from a single Baserow queue instead of a pile of tabs |
| Status polling across HeyGen, Runway, CaptionsAI, and json2video, plus write-back to Baserow | A tracker that always shows what's done, what failed, and what needs attention |
Example: What This Looks Like
Say you batch 20 short videos every Monday. Manually, you might spend about 10 minutes per video just copying the brief, checking voice/avatar settings, exporting files, and updating a tracker, which is roughly 3 hours of admin before the “real work” even counts. With this workflow, you queue the 20 briefs in Baserow and trigger the run once, then n8n handles generation and status checks while you do other work. You’ll still spend time reviewing outputs, but the repetitive tracking and babysitting time drops to a quick scan of the updated Baserow rows.
What You’ll Need
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- Baserow for queuing briefs and tracking status
- HeyGen to generate avatar-driven video segments
- OpenAI API key (get it from the OpenAI API dashboard)
Skill level: Intermediate. You’ll connect accounts, paste API keys, and map a few fields so your Baserow columns match your video template.
Don’t want to set this up yourself? Talk to an automation expert (free 15-minute consultation).
How It Works
A queued request kicks things off. A webhook receives a payload (often created from a Baserow form or a “ready to produce” record), then the workflow shapes that into standardized fields it can trust.
The workflow decides what to generate. It routes based on script type (AI-generated vs manual), checks which production path you enabled (HeyGen avatar or an alternate captions route), and prepares the right request bodies for each external tool.
Media generation runs in batches. Scenes can be split and processed in groups, with waits and status checks in between so the workflow doesn’t move on until assets are actually ready. This is where HTTP requests, conditional logic, and merging outputs keep everything aligned.
Results get written back to your system of record. When renders finish (or fail), the workflow updates the Baserow record with output fields and logs errors clearly, so you can retry without guessing what happened.
You can easily modify the Baserow fields and the generation options to match your brand voice and video format. See the full implementation guide below for customization options.
Step-by-Step Implementation Guide
Step 1: Configure the Webhook Trigger
Set up the entry point so external systems can start the automation.
- Add and open Incoming Webhook Trigger.
- Configure the webhook path and HTTP method expected by your source system.
- Copy the test URL and use it to send a sample request to validate the incoming payload.
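Before wiring the rest of the workflow, it helps to sanity-check the payload you plan to send. The sketch below builds a hypothetical sample payload — every field name here is an assumption; match them to your own Baserow columns and the webhook's expected body:

```javascript
// Hypothetical sample payload for testing the webhook trigger.
// Field names are assumptions — adapt them to your Baserow columns.
const samplePayload = {
  rowId: 42,                 // Baserow row to update when the render finishes
  scriptType: "ai",          // "ai" or "manual"
  topic: "3 ways to batch short videos",
  withHeygen: true,          // request an avatar segment from HeyGen
  withCaptions: true,
  backgroundType: "image",   // "image" or "video"
};

// Quick sanity check before sending: the routing nodes downstream
// need these fields present to pick a processing path.
function missingFields(payload) {
  const required = ["rowId", "scriptType", "topic"];
  return required.filter((key) => payload[key] === undefined);
}

const missing = missingFields(samplePayload);
if (missing.length > 0) {
  throw new Error(`Payload is missing: ${missing.join(", ")}`);
}

// To actually fire the test request (Node 18+), something like:
// fetch("https://your-n8n-host/webhook-test/your-path", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(samplePayload),
// });
console.log("sample payload is valid:", JSON.stringify(samplePayload));
```

Send this to the test URL while the workflow is open in n8n so you can inspect the received body node by node.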
Step 2: Connect Baserow
Baserow is used for reading, updating, and logging records during the workflow.
- Open Process Baserow Entry and connect to the correct database and table.
- Open Modify Script Record and configure the record update mapping to store script results.
- Open Update Baserow Record and map the fields to store final render output and status.
- Open Log Baserow Error and map the fields used for error logging.
Step 3: Set Up the Request Mapping and Processing Routes
Normalize the incoming payload and decide which processing path to use.
- In Map Request Body, map fields from the webhook payload into a clean structure used downstream.
- Configure Determine Processing to decide which path the request follows.
- Configure Route Script Type to route the payload to the correct script path.
- Confirm the parallel split: Determine Processing outputs to both Route Script Type and Process Baserow Entry.
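The "clean structure" that Map Request Body produces might look like the sketch below, written the way an n8n Code node would express it. The field names and coercions are assumptions, not the template's exact mapping:

```javascript
// A minimal sketch of what "Map Request Body" might do. In n8n, a Code
// node would wrap this and return [{ json: mapRequestBody($json.body) }].
// All field names are assumptions — adapt them to your webhook payload.
function mapRequestBody(body) {
  return {
    rowId: Number(body.rowId),                       // Baserow row to update
    scriptType: (body.scriptType || "ai").toLowerCase(),
    topic: (body.topic || "").trim(),
    // Webhook payloads often carry booleans as strings — normalize both.
    withHeygen: body.withHeygen === true || body.withHeygen === "true",
    withCaptions: body.withCaptions === true || body.withCaptions === "true",
    backgroundType: body.backgroundType || "image",  // default background path
  };
}

const clean = mapRequestBody({
  rowId: "42",
  scriptType: "AI",
  topic: "  3 ways to batch short videos  ",
  withHeygen: "true",
});
console.log(clean);
```

Normalizing once at the top means every downstream node (Switch, IF, HTTP Request) can trust the shape instead of repeating defensive checks.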
Step 4: Configure the AI/LLM Chains
These nodes generate and parse structured scene data for the workflow.
- Open Primary LLM Chain and configure prompts and inputs for automated script generation.
- Open Manual LLM Chain for the fallback/manual path when Conditional Gate routes to manual processing.
- Ensure Structured Result Parser is connected to both LLM chains for structured output parsing.
- Confirm the parallel splits: each LLM chain (Primary and Manual) outputs to both Map Scene Fields and Modify Script Record.
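A structured output parser is only useful if you know what "structured" means for this workflow. The sketch below shows one plausible validation of the scene data both LLM chains should emit — the exact fields (`sceneText`, `imagePrompt`) are assumptions, not the template's actual contract:

```javascript
// Hypothetical check for the scene structure the LLM chains should return.
// Returns null when valid, or a human-readable problem description.
function validateScenes(output) {
  if (!Array.isArray(output.scenes) || output.scenes.length === 0) {
    return "expected a non-empty scenes array";
  }
  for (const [i, scene] of output.scenes.entries()) {
    if (typeof scene.sceneText !== "string" || !scene.sceneText) {
      return `scene ${i}: missing sceneText`;
    }
    if (typeof scene.imagePrompt !== "string" || !scene.imagePrompt) {
      return `scene ${i}: missing imagePrompt`;
    }
  }
  return null; // valid
}

const good = { scenes: [{ sceneText: "Hook line", imagePrompt: "city at dusk" }] };
const bad = { scenes: [{ sceneText: "Hook line" }] };
console.log(validateScenes(good)); // null → valid
console.log(validateScenes(bad));
```

If the parser keeps failing, run this kind of check on the raw LLM output to see which field the model is dropping, then tighten the prompt accordingly.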
Step 5: Build the Scene Processing and Media Generation Loop
Transform scenes, split them into batches, and generate images or video backgrounds.
- In Map Scene Fields, map the structured LLM output to scene fields.
- Use Split Scene Items to split each scene into individual items, then loop with Iterate Scene Batch.
- In Refine Prompt Call → Generate Image Request → Pause Image Poll → Retrieve Image ID, configure the image generation/polling flow.
- Route background logic in Check Background Type to either Runway Video Create or Set Image Output.
- Configure the Runway polling sequence: Runway Video Create → Pause Runway Poll → Runway Video Fetch → Set Output Fields → Iterate Scene Batch.
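The Wait → Check → Route pattern used for Runway (and repeated later for HeyGen, CaptionsAI, and json2video) is just polling. Here is a generic sketch with a mocked status check so it runs anywhere; in n8n the same logic is spread across Wait and IF nodes rather than written as a loop:

```javascript
// Generic polling sketch: keep checking until the job reports "done",
// "error", or we give up. The statuses are placeholders — each service
// has its own status vocabulary.
async function pollUntilDone(checkStatus, { intervalMs = 10, maxAttempts = 10 } = {}) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const status = await checkStatus();
    if (status === "done") return { ok: true, attempts: attempt };
    if (status === "error") return { ok: false, attempts: attempt };
    // This pause is what the Wait node does in the workflow.
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return { ok: false, attempts: maxAttempts }; // treat a timeout as failure
}

// Mock: the render reports "processing" twice, then "done".
let calls = 0;
const mockCheck = async () => (++calls < 3 ? "processing" : "done");

pollUntilDone(mockCheck).then((result) => {
  console.log(result); // { ok: true, attempts: 3 }
});
```

The `maxAttempts` cap matters: without it, a stuck render leaves the workflow waiting forever instead of routing to the error branch.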
Step 6: Configure Avatar, Captions, and Render Pipeline
Control whether avatars and captions are added, then render the final video.
- Set avatar routing in Check Avatar Enabled, which sends scenes to Aggregate Scenes or Transform Logic.
- Configure Check HeyGen Enabled to route into HeyGen Video Request or directly to Render Video Request.
- Set up HeyGen polling: HeyGen Video Request → Pause HeyGen Poll → Check HeyGen Status → Route HeyGen Response → Prepare HeyGen Payload.
- Configure captions: Aggregate Scenes → CaptionsAI Request → Pause Captions Poll → Check Captions Status → Route Captions Response → Append Subtitles Logic.
- Finalize render flow: Prepare HeyGen Payload or Append Subtitles Logic → Render Video Request → Pause Render Check → Check Render Status → Route Render Response → Update Baserow Record.
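To give a feel for what "Prepare HeyGen Payload" produces, here is a sketch of a payload builder. The shape loosely follows HeyGen's v2 video-generate endpoint, but treat every field name as an assumption and verify it against HeyGen's current API docs and the node's actual mapping:

```javascript
// Hypothetical HeyGen payload builder. Field names approximate HeyGen's
// v2 generate endpoint — verify against the official API reference before
// relying on them.
function buildHeygenPayload(scene, settings) {
  return {
    video_inputs: [
      {
        character: {
          type: "avatar",
          avatar_id: settings.avatarId, // must exist in your HeyGen library
          avatar_style: "normal",
        },
        voice: {
          type: "text",
          input_text: scene.sceneText,
          voice_id: settings.voiceId,   // must be available on your plan
        },
      },
    ],
    dimension: { width: 720, height: 1280 }, // vertical short-video format
  };
}

const payload = buildHeygenPayload(
  { sceneText: "Hook line for the opener" },
  { avatarId: "avatar_123", voiceId: "voice_456" }
);
console.log(JSON.stringify(payload, null, 2));
```

Building the payload in one place makes the Common Gotchas below easier to debug: a missing `avatar_id` or `voice_id` is visible in a single node's output instead of scattered expressions.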
Step 7: Connect Sub-Workflow Configuration Nodes
This workflow calls multiple sub-workflows for configuration and error handling.
- Open all executeWorkflow nodes used for configuration: Run Sub-Workflow A (Config), Run Sub-Workflow B (Config), Run Sub-Workflow C (Config), Run Sub-Workflow D (Config), Run Sub-Workflow E (Config), and Run Sub-Workflow F (Config).
- Select the correct target workflows in each node.
- Verify the error-routing sub-workflows: Run Sub-Workflow RenderErr, Run Sub-Workflow RenderErr2, Run Sub-Workflow CaptErr, Run Sub-Workflow CaptErr2, Run Sub-Workflow HeyGenErr, and Run Sub-Workflow HeyGenErr2.
Step 8: Add Error Handling
Ensure error paths log failures and stop execution safely.
- Confirm Triggered by Workflow Call routes into Log Baserow Error for centralized error logging.
- Make sure Log Baserow Error is mapped to record error details, then flows to Stop With Error.
- Check that error branches from HeyGen Video Request, CaptionsAI Request, and Render Video Request connect to their respective sub-workflow error handlers.
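The error branch is only useful if it writes something a human can act on. This sketch shows one shape the data handed to Log Baserow Error might take — the column names (Status, Error, FailedAt) are hypothetical; use whatever your Baserow table actually defines:

```javascript
// Hypothetical error-record builder for the Baserow error log.
// Column names (Status, Error, FailedAt) are assumptions.
function buildErrorRecord(rowId, step, httpResponse) {
  return {
    rowId,
    fields: {
      Status: "error",
      // Name the failing step and keep the message short enough for a
      // text column.
      Error: `${step}: ${httpResponse?.message || "unknown error"}`.slice(0, 255),
      FailedAt: new Date().toISOString(),
    },
  };
}

const record = buildErrorRecord(42, "HeyGen Video Request", {
  message: "voice_id not found",
});
console.log(record.fields.Error); // "HeyGen Video Request: voice_id not found"
```

Logging the step name alongside the message is what lets you "retry without guessing what happened" — the row tells you which of the three external services failed.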
Step 9: Test and Activate Your Workflow
Validate the end-to-end flow before enabling it in production.
- Click Execute Workflow and send a sample request to Incoming Webhook Trigger.
- Verify that Determine Processing routes correctly and that either Primary LLM Chain or Manual LLM Chain completes.
- Confirm that the scene loop completes: Map Scene Fields → Split Scene Items → Iterate Scene Batch and that media generation requests succeed.
- Check that Update Baserow Record writes the final render output and status.
- When satisfied, toggle the workflow to Active for production use.
Common Gotchas
- Baserow credentials can expire or need specific permissions. If things break, check your n8n Credentials page and the Baserow token scope first.
- If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
- HeyGen requests can fail if your payload is missing a required avatar/voice setting. Check the last HTTP response body in n8n, then confirm your HeyGen template settings match the fields you’re mapping.
Frequently Asked Questions
How long does setup take?
About 30 minutes once your accounts are ready.
Do I need to know how to code?
No. You'll mostly connect accounts and map fields from Baserow into the video request.
Is n8n free to use?
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You'll also need to factor in OpenAI API usage (often a few cents per run) and any HeyGen generation costs on your plan.
Where should I host n8n?
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
Can I customize this workflow?
Yes, and you should. Most people customize the script prompts in the LLM chain, adjust the mapped fields that control captions and audio, and swap avatar settings in the HeyGen payload preparation. If you want a different intake structure, you can also modify the Baserow table (then update the “Map Request Body” and “Set Output Fields” nodes to match). That’s usually the difference between “it works” and “it ships on-brand.”
Why do my HeyGen requests fail?
Usually it’s an expired API key or a missing required field in the request body. Check the HTTP Request node response in n8n to see the exact error, then confirm your HeyGen template, avatar, and voice identifiers match what you’re sending. If it works for single videos but fails in bulk, rate limits or too-short wait times can also be the culprit.
Is there a limit on how many videos I can batch?
If you self-host, there’s no execution limit in n8n, so your practical limit is your server and your HeyGen plan.
Is n8n better than Zapier or Make for this?
Often, yes, because this kind of workflow needs branching logic, batching, and “poll until render is finished” behavior that gets awkward (and pricey) in simpler tools. n8n also gives you more control over field mapping, retries, and error handling, which matters once you’re generating 20 or 200 videos. Zapier or Make can still be fine for a lightweight version, like “new row → send one request → post a notification.” The minute you want bulk generation plus status tracking back in Baserow, n8n is usually the calmer choice. Talk to an automation expert if you want help deciding.
Once this is running, batching stops feeling like a production fire drill. You queue the work, the workflow does the repetitive parts, and Baserow tells you the truth about what’s actually ready.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.