Postgres to GitHub, versioned CSV backups made easy
Manual database exports sound simple until you miss one table, overwrite the wrong file, or realize last week’s “backup” never actually ran.
That pain hits developers first, but data analysts and ops-minded founders feel it too. This Postgres-to-GitHub backup automation gives you a clean, versioned CSV snapshot in a GitHub repo every day, without someone babysitting pgAdmin at 6pm.
Below is the exact workflow logic, what it produces, and where teams usually tweak it so it matches their repo structure and naming conventions.
How This Automation Works
The full n8n workflow, from trigger to final output:
n8n Workflow Template: Postgres to GitHub, versioned CSV backups made easy
```mermaid
flowchart LR
subgraph sg0["Scheduled Daily Flow"]
direction LR
n0@{ icon: "mdi:play-circle", form: "rounded", label: "Scheduled Daily Trigger", pos: "b", h: 48 }
n1["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/postgres.svg' width='40' height='40' /></div><br/>Fetch Table Records"]
n2@{ icon: "mdi:swap-vertical", form: "rounded", label: "Iterate Batches", pos: "b", h: 48 }
n3["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Transform Script"]
n4@{ icon: "mdi:cog", form: "rounded", label: "Generate CSV File", pos: "b", h: 48 }
n5["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/postgres.svg' width='40' height='40' /></div><br/>Retrieve Table Catalog"]
n6["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/github.dark.svg' width='40' height='40' /></div><br/>List Repo Files GitHub"]
n7["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/itemLists.svg' width='40' height='40' /></div><br/>Aggregate Repo Filenames"]
n8@{ icon: "mdi:swap-vertical", form: "rounded", label: "Split Single Records", pos: "b", h: 48 }
n9@{ icon: "mdi:swap-horizontal", form: "rounded", label: "Verify Repo File Presence", pos: "b", h: 48 }
n10["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/github.dark.svg' width='40' height='40' /></div><br/>Update GitHub File"]
n11["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/github.dark.svg' width='40' height='40' /></div><br/>Upload GitHub File"]
n3 --> n1
n1 --> n4
n5 --> n2
n0 --> n6
n2 --> n8
n2 --> n3
n4 --> n2
n10 --> n8
n11 --> n8
n8 --> n9
n7 --> n5
n9 --> n10
n9 --> n11
n6 --> n7
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n0 trigger
class n9 decision
class n1,n5 database
class n3 code
classDef customIcon fill:none,stroke:none
class n1,n3,n5,n6,n7,n10,n11 customIcon
```
The Problem: Database “Backups” That Aren’t Actually Reliable
Most small teams don’t fail at backups because they don’t care. They fail because the process is annoying, easy to forget, and weirdly fragile. Someone exports a table, downloads a CSV, renames it “final-final.csv”, and drops it into a shared folder that nobody audits. Then the schema changes. A new table appears. Or a critical table grows and the export takes longer, so it gets skipped “just this once”. A month later, you need last Tuesday’s data and you discover your snapshots are incomplete, inconsistent, or missing entirely.
The friction compounds. Here’s where it breaks down in real life.
- Exports don’t scale with your schema, so new public tables quietly never get included.
- CSV files get overwritten or scattered into messy folders, which makes “what changed?” painful to answer.
- Manual snapshots invite human error, and a single bad export can look “fine” until you try to use it.
- You lose time doing busywork instead of using the data, and honestly that’s the most expensive part.
The Solution: Daily Postgres-to-GitHub CSV Snapshots
This n8n workflow runs on a daily schedule and turns every table in your Postgres public schema into a CSV file stored in a GitHub repository. First, it checks what’s already in the repo, so it can avoid duplicate files and decide whether it should update an existing table export or create a new one. Then it pulls the current list of tables from Postgres, loops through them in batches, extracts the rows for each table, and converts that data into a CSV file. Finally, it either updates the matching file in GitHub (so history is preserved as commits) or uploads a brand-new CSV if the table didn’t exist yesterday.
The workflow starts with a schedule trigger. In the middle, it compares “tables in Postgres” vs. “files in GitHub” and generates one CSV per table. At the end, GitHub becomes your daily, versioned snapshot source of truth.
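To make the moving parts concrete, here is a plain-JavaScript orientation sketch of a single run. None of this is the template's actual code; the comments map each line to the node it mirrors, and the table and file names are made up.

```javascript
// Orientation-only sketch of one daily run; illustrative data, not the template's code.
const filesInRepo = ["customers", "orders"];                  // List Repo Files GitHub -> Aggregate Repo Filenames
const tablesInPostgres = ["customers", "orders", "invoices"]; // Retrieve Table Catalog (public schema)

for (const table of tablesInPostgres) {                       // Iterate Batches -> Fetch Table Records
  const csvName = table;                                      // Generate CSV File names each file after its table
  if (filesInRepo.includes(csvName)) {
    console.log(`update ${csvName} (existing file, new commit)`); // Update GitHub File
  } else {
    console.log(`upload ${csvName} (new table since last run)`);  // Upload GitHub File
  }
}
```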
What You Get: Automation vs. Results
| What This Workflow Automates | Results You’ll Get |
|---|---|
| Daily scheduled export of every table in the Postgres public schema | Fresh CSV snapshots land in GitHub every day without anyone running exports |
| Checking the repo and deciding whether to update an existing CSV or upload a new one | Version history as commits, so "what changed since last Tuesday?" has an answer |
| Converting each table's rows into a per-table CSV file | One predictably named file per table instead of scattered, overwritten exports |
Example: What This Looks Like
Say you have 20 public tables and you export them manually once a day. Even if each table takes only about 5 minutes to query, export, name correctly, and upload, that’s roughly 100 minutes daily (and that’s on a “nothing went weird” day). With this workflow, the “work” is basically zero: the schedule trigger runs automatically, then you just let it process for a bit in the background. Most teams go from “someone loses an hour every morning” to “check GitHub if you’re curious.”
What You’ll Need
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- Postgres for the source tables in the `public` schema.
- GitHub to store versioned CSV snapshots in a repo.
- Postgres credentials (create a read-only user in your DB)
Skill level: Beginner. You’ll connect Postgres and GitHub in n8n and choose the target repo, no coding required.
Don’t want to set this up yourself? Talk to an automation expert (free 15-minute consultation).
How It Works
A daily schedule kicks things off. n8n runs the workflow every 24 hours, so you don’t rely on someone remembering to do exports after standup.
GitHub is checked first. The workflow lists files in your target repo and compiles the filenames, which means it can decide what needs to be updated versus created from scratch.
Postgres tables are discovered and exported. It queries Postgres for the current list of tables in the public schema, then loops through them in manageable batches, selects rows from each table, and converts the result into a CSV file.
Files are updated or uploaded. If a CSV already exists for that table, the workflow updates the GitHub file (creating a commit). If it’s a new table, it uploads a new file into the repo.
You can easily modify the schema, naming rules, or repo folder structure to match your internal conventions. See the full implementation guide below for customization options.
Step-by-Step Implementation Guide
Step 1: Configure the Scheduled Trigger
Set the workflow to run automatically on a daily schedule.
- Add and open Scheduled Daily Trigger.
- Set Rule to an interval with `hours` as the unit and `hoursInterval` set to `24`.
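For reference, if you export the workflow and peek at the JSON, the trigger's rule should look roughly like the object below. Treat the exact field names as an assumption; they can shift between n8n versions.

```javascript
// Approximate shape of the Scheduled Daily Trigger parameters in an exported workflow
// (assumption: current n8n Schedule Trigger field names).
const scheduleParameters = {
  rule: {
    interval: [
      { field: "hours", hoursInterval: 24 }, // fire once every 24 hours
    ],
  },
};
console.log(JSON.stringify(scheduleParameters, null, 2));
```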
Step 2: Connect GitHub and Catalog Existing Files
Pull the existing repository filenames so the workflow can decide whether to update or upload backups.
- Open List Repo Files GitHub and set Owner to `user` and Repository to `github-repo`.
- Keep Resource set to `file` and Operation set to `list`.
- Leave File Path as `=` to list root files.
- Credential Required: Connect your githubOAuth2Api credentials in List Repo Files GitHub.
- Open Aggregate Repo Filenames and set Operation to `aggregateItems` with Fields To Aggregate set to `name`.
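If you'd rather not use the Item Lists node, a Code node can do the same aggregation. This is a sketch of the equivalent logic, not part of the template:

```javascript
// Code-node alternative to "Aggregate Repo Filenames": collapse every listed repo file
// into a single item whose `name` field holds the array of filenames downstream nodes expect.
const names = $input.all().map((item) => item.json.name);
return [{ json: { name: names } }];
```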
Step 3: Connect Postgres and Load the Table Catalog
Query your database to list all public tables that should be backed up.
- Open Retrieve Table Catalog and set Schema to `information_schema`.
- Set Table to `tables` and add a Where filter with column `table_schema` and value `public`.
- Credential Required: Connect your postgres credentials in Retrieve Table Catalog.
- Open Iterate Batches (first instance) to process the catalog in batches.
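Under the hood, this step amounts to asking Postgres for its own catalog. The sketch below shows the equivalent query as a comment, plus an optional Code-node filter (a common tweak) for skipping tables you never want exported; the names in the skip list are hypothetical.

```javascript
// Rough equivalent of what "Retrieve Table Catalog" asks Postgres:
//   SELECT * FROM information_schema.tables WHERE table_schema = 'public';
//
// Optional: a Code node placed right after it can drop tables you don't want backed up.
// The table names below are hypothetical examples; adjust to your schema.
const skip = new Set(["schema_migrations", "ar_internal_metadata"]);
return $input.all().filter((item) => !skip.has(item.json.table_name));
```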
Step 4: Extract Table Records and Build CSV Files
Loop through each table, fetch records, and convert them to CSV.
- Open Transform Script and keep JavaScript Code set to `return $input.all();`.
- Open Fetch Table Records and set Operation to `select` with Return All enabled.
- Set Table to the expression `{{ $json.table_name }}` and Schema to `public`.
- Credential Required: Connect your postgres credentials in Fetch Table Records.
- Open Generate CSV File and set Binary Property Name to `=data`.
- Set File Name to `{{ $('Retrieve Table Catalog').item.json.table_name }}` so each CSV is named after its table.
- Connect Generate CSV File to Iterate Batches as shown in the workflow.
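The Transform Script node is a pass-through by default. If you want standardized filenames (lowercase, underscores), here is a hedged sketch of an extended version; the `csv_file_name` field is hypothetical, and you would point Generate CSV File's File Name expression at `{{ $('Transform Script').item.json.csv_file_name }}` instead of the raw table name.

```javascript
// Extended "Transform Script" sketch: keep table_name untouched for the SELECT,
// but add a normalized name (hypothetical field) to use as the CSV filename.
return $input.all().map((item) => {
  const tableName = item.json.table_name;
  item.json.csv_file_name = tableName.toLowerCase().replace(/[^a-z0-9]+/g, "_");
  return item;
});
```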
Note: double-check that Fetch Table Records keeps Schema set to `public` correctly.

Step 5: Split Records and Route to GitHub Update or Upload
Process one CSV at a time and decide whether to update an existing file or upload a new one.
- Open Split Single Records and set Batch Size to `1`.
- Open Verify Repo File Presence and set the string condition to:
  - Value 1: `{{ $node['Aggregate Repo Filenames'].json.name }}`
  - Value 2: `{{ $binary.data.fileName }}`
  - Operation: `contains`
- Note that Verify Repo File Presence outputs to both Update GitHub File and Upload GitHub File depending on the condition.
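Boiled down, the presence check compares the aggregated filename list against each CSV's filename. Below is a plain-JS sketch of the same decision with made-up values; note that a "contains" check is a substring match, so very similar table names (for example `user` vs `users`) can match each other and may deserve an exact-match condition instead.

```javascript
// What "Verify Repo File Presence" effectively decides (illustrative values only).
const namesInRepo = ["customers", "orders"]; // {{ $node['Aggregate Repo Filenames'].json.name }}
const csvFileName = "invoices";              // {{ $binary.data.fileName }}

const alreadyExists = String(namesInRepo).includes(csvFileName); // "contains" on the stringified list
console.log(alreadyExists ? "route to Update GitHub File" : "route to Upload GitHub File");
```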
Step 6: Configure GitHub Update and Upload Actions
Write CSV backups into your GitHub repository with unique commit messages.
- Open Update GitHub File and set Resource to `file` and Operation to `edit`.
- Set File Path to `{{ $binary.data.fileName }}` and enable Binary Data.
- Set Commit Message to `=backup-{{ $now.toMillis() }}`.
- Credential Required: Connect your githubOAuth2Api credentials in Update GitHub File.
- Open Upload GitHub File and set Resource to `file` with Binary Data enabled.
- Set File Path to `{{ $binary.data.fileName }}`.
- Set Commit Message to `=backup-{{ $node['Set commit date'].json.commitDate }}`.
- Credential Required: Connect your githubOAuth2Api credentials in Upload GitHub File.
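Commit messages are free-form, so millisecond timestamps aren't mandatory. A date-based label is easier to scan in the repo history; the n8n expression variant assumes `$now` is the Luxon DateTime n8n normally exposes.

```javascript
// Plain-JS sketch of a human-readable daily commit label.
// In the GitHub nodes you'd express this as something like =backup-{{ $now.toFormat('yyyy-MM-dd') }}
// (assumption: $now is available as a Luxon DateTime in your n8n version).
const commitMessage = `backup-${new Date().toISOString().slice(0, 10)}`; // e.g. backup-2025-01-31
console.log(commitMessage);
```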
Warning: `{{ $node['Set commit date'].json.commitDate }}` references a node not present in this workflow. Replace it with a valid expression (for example, `{{ $now.toMillis() }}`) or add the missing node before running.

Step 7: Test & Activate Your Workflow
Validate the end-to-end backup flow and enable the automation.
- Click Execute Workflow to run a manual test from Scheduled Daily Trigger.
- Confirm that List Repo Files GitHub returns file names and that Retrieve Table Catalog returns public tables.
- Verify that Generate CSV File outputs a binary file named after each table.
- Check that Verify Repo File Presence routes files to Update GitHub File or Upload GitHub File correctly.
- Open your GitHub repository and confirm new or updated CSV files with the expected commit message.
- Toggle the workflow Active to enable daily backups.
Common Gotchas
- GitHub credentials can expire or need specific permissions. If things break, check your n8n GitHub credential and the repo access scope first.
- Large tables make the export step slow. If executions drag or time out, lower the batch size in Iterate Batches or exclude oversized tables from the catalog.
- The Upload GitHub File commit message references a "Set commit date" node that isn't in this workflow. Swap in an expression like `{{ $now.toMillis() }}` (as Update GitHub File uses) before activating.
Frequently Asked Questions
How long does setup take?
About 30 minutes if your Postgres and GitHub access are ready.

Do I need to know how to code?
No. You connect accounts and adjust a couple of configuration fields in n8n.

Is it free to run?
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in GitHub and Postgres usage (usually $0 for typical API access).

Should I use n8n Cloud or self-host?
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
Can I back up a different schema or write the CSVs into a subfolder?
Yes, and it’s a common tweak. You can change the Postgres catalog query in the “Retrieve Table Catalog” node to target a different schema, and you can adjust the destination path in the “Upload GitHub File” and “Update GitHub File” nodes to write into a subfolder like /snapshots/public/. Many teams also modify the “Transform Script” node to standardize filenames (lowercase, underscores) and to skip tables they don’t want exported. If you move the CSVs into a subfolder, point “List Repo Files GitHub” at the same path so the presence check keeps matching.

What should I check when the GitHub steps fail?
Usually it’s an OAuth scope or repo permission issue. Reconnect the GitHub credential in n8n, confirm you granted access to the target repository, and then re-run the “List Repo Files GitHub” node to verify it can read contents. If updates fail but listing works, check whether the workflow is trying to write to a protected branch. Rate limits are also possible if you’re exporting lots of tables at once, so batching helps.

How many tables can this handle?
Dozens is normal, and hundreds can work if your database and repo are sized for it.

Is n8n the right tool for this, or should I use Zapier or Make?
Often, yes, because this job is more than a simple two-step zap. n8n handles looping through tables, branching logic, and file updates cleanly, and you can self-host for unlimited executions. Zapier and Make can do parts of it, but you’ll usually end up juggling iterators, file transforms, and pricing limits. If you already live in GitHub and care about version history, n8n is a practical fit. Talk to an automation expert if you’re not sure which fits.
Once this is running, GitHub quietly becomes your daily snapshot library. Set it up, let it commit, and use your time for work that actually moves things forward.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.