Supabase + OpenAI: X posts in your real voice
Writing on X sounds simple until you are staring at a blank draft, trying to “sound like you” for the fifth time this week. Then you scroll old posts, copy phrases into a notes app, and hope the new version doesn’t read like generic AI.
This Supabase OpenAI automation hits solo creators first, but agency strategists and in-house marketers feel it too. You get a repeatable way to generate posts, replies, and image prompts that stay consistent with your real tone, without re-training your brain every morning.
This workflow turns your past writing into a lightweight “knowledge base,” then uses it as context every time you request new content. Below is the flow, what it replaces, and how teams use it day-to-day.
How This Automation Works
The full n8n workflow, from trigger to final output:
n8n Workflow Template: Supabase + OpenAI: X posts in your real voice
flowchart LR
subgraph sg0["Form: Add to KB Flow"]
direction LR
n0["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/form.svg' width='40' height='40' /></div><br/>Form: Add to KB"]
n1@{ icon: "mdi:swap-vertical", form: "rounded", label: "Normalize (ingest)", pos: "b", h: 48 }
n2@{ icon: "mdi:vector-polygon", form: "rounded", label: "Embeddings OpenAI (ingest)", pos: "b", h: 48 }
n3@{ icon: "mdi:robot", form: "rounded", label: "Document Loader (+metadata)", pos: "b", h: 48 }
n4@{ icon: "mdi:robot", form: "rounded", label: "Text Splitter", pos: "b", h: 48 }
n5@{ icon: "mdi:cube-outline", form: "rounded", label: "VectorStore (Supabase)", pos: "b", h: 48 }
n6["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/form.svg' width='40' height='40' /></div><br/>End Page (ingest)"]
n7["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/form.svg' width='40' height='40' /></div><br/>Form: Generate"]
n8@{ icon: "mdi:swap-vertical", form: "rounded", label: "Build Params", pos: "b", h: 48 }
n9@{ icon: "mdi:brain", form: "rounded", label: "OpenAI Chat Model", pos: "b", h: 48 }
n10@{ icon: "mdi:cube-outline", form: "rounded", label: "KB (Supabase VectorStore)", pos: "b", h: 48 }
n11@{ icon: "mdi:robot", form: "rounded", label: "Generator Agent", pos: "b", h: 48 }
n12@{ icon: "mdi:swap-vertical", form: "rounded", label: "Edit Fields (format HTML)", pos: "b", h: 48 }
n13["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/form.svg' width='40' height='40' /></div><br/>End Page (generate)"]
n8 --> n11
n4 -.-> n3
n7 --> n8
n0 --> n1
n11 --> n12
n9 -.-> n11
n1 --> n5
n5 --> n6
n12 --> n13
n10 -.-> n11
n2 -.-> n5
n2 -.-> n10
n3 -.-> n5
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n0,n7 trigger
class n3,n4,n11 ai
class n9 aiModel
class n5,n10 ai
class n2 ai
classDef customIcon fill:none,stroke:none
class n0,n6,n7,n13 customIcon
The Problem: “On-brand” content still takes forever
Most content bottlenecks are not about ideas. They’re about translation. You know what you want to say, but turning it into something that matches your voice (your pacing, your structure, your favorite phrasing) takes repeated passes. And if you hand it to AI without context, it often comes back polished and wrong. Then you edit, second-guess, and lose the whole point of “saving time.” Multiply that by daily posts, replies, and quote tweets, and you’re spending hours each week redoing work you already did in the past.
The friction compounds. Here’s where it usually breaks down.
- You end up rewriting AI drafts from scratch because the tone is “close” but not yours.
- Good past posts live in random places (X bookmarks, docs, screenshots), so you can’t reuse your own best patterns reliably.
- Replies take the most energy because they need context, restraint, and personality, not just information.
- Consistency drops when you’re busy, which means your feed starts feeling like multiple authors.
The Solution: A self-learning X voice engine in n8n
This workflow builds a simple retrieval system around your own writing so OpenAI can draft in your voice with much less guesswork. You start by uploading past posts or notes through a built-in “Add to KB” form. The workflow standardizes that text, creates embeddings (think: a searchable “meaning fingerprint”), and stores everything in Supabase using a vector table called documents. When you want new content, you open a second form, describe what you need, and the AI Agent pulls the most relevant examples from your Supabase knowledge base. It then generates a post, a quote, a reply, and an image prompt, and displays the results on a clean HTML results page.
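Conceptually, "pulls the most relevant examples" means a nearest-neighbor search over embeddings. Here is a toy JavaScript sketch of that idea, with made-up 3-dimensional vectors standing in for real OpenAI embeddings (which have ~1536 dimensions); in the actual workflow, Supabase/pgvector runs this search for you:

```javascript
// Cosine similarity: how closely two "meaning fingerprints" point
// in the same direction, regardless of their length.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Hypothetical stored snippets with pre-computed toy embeddings.
const kb = [
  { text: "Ship small, ship daily.", embedding: [0.9, 0.1, 0.0] },
  { text: "Your voice is your moat.", embedding: [0.1, 0.9, 0.2] },
  { text: "Threads beat single posts.", embedding: [0.2, 0.2, 0.9] },
];

// Return the k snippets most similar to the query embedding.
function topK(queryEmbedding, k) {
  return kb
    .map((doc) => ({ ...doc, score: cosineSimilarity(queryEmbedding, doc.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((doc) => doc.text);
}

console.log(topK([0.85, 0.15, 0.05], 2));
```

The agent's `topK` form field maps directly onto the `k` in this search: a higher value pulls in more of your past writing as context.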
The workflow begins with two forms: one for ingesting your best writing, one for requesting fresh content. Supabase acts as the long-term memory, while OpenAI provides the drafting. Finally, n8n formats the output so you can copy, tweak, and publish.
What You Get: Automation vs. Results
| What This Workflow Automates | Results You’ll Get |
|---|---|
| Storing your past posts as a searchable Supabase knowledge base | Drafts that reuse your own phrasing, pacing, and structure |
| Retrieving your most relevant writing samples for each request | A post, quote, reply, and image prompt from one form submission |
| Formatting the generated output as a copy-ready HTML page | Roughly an hour back per week, more as the knowledge base grows |
Example: What This Looks Like
Say you publish 5 posts a week and you also try to reply thoughtfully to 10 threads. A “manual but assisted” process often means digging up 3 past examples (maybe 5 minutes each), drafting, then rewriting for tone, which can easily hit about 2 hours per week. With this workflow, you spend about 10 minutes adding your best posts to the knowledge base once, then each new request is a quick form submission plus a short wait for generation. Many teams get back roughly an hour a week right away, and more as the knowledge base grows.
What You’ll Need
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- Supabase to store vectors in a documents table.
- OpenAI for embeddings and the chat model.
- Supabase URL + Service Key (get it from Supabase project settings).
Skill level: Intermediate. You’ll connect accounts, set table names, and verify permissions, but you won’t be writing code.
Don’t want to set this up yourself? Talk to an automation expert (free 15-minute consultation).
How It Works
You submit content to the knowledge base. The “KB Intake Form” is the trigger. You paste in past posts or notes, plus optional metadata like topic or style, so the workflow has something real to learn from.
Your writing gets cleaned up and prepared for search. n8n standardizes the intake data, then the text is split into sensible chunks (so long posts don’t become one messy blob). OpenAI generates embeddings for each chunk.
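To make "sensible chunks" concrete, here is a simplified sliding-window chunker in JavaScript, using the same chunk size (1200) and overlap (150) that the workflow's splitter is configured with. This is a sketch: n8n's actual recursive splitter also prefers paragraph and sentence boundaries rather than cutting at fixed character positions.

```javascript
// Character-based chunking with overlap. The overlap keeps a bit of
// shared context at each chunk boundary so retrieval doesn't lose
// sentences that straddle two chunks.
function splitText(text, chunkSize = 1200, overlap = 150) {
  const chunks = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
    start += chunkSize - overlap; // slide forward, minus the overlap
  }
  return chunks;
}

const demo = splitText("x".repeat(3000));
console.log(demo.length, demo[0].length); // 3 chunks for 3000 chars
```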
Supabase stores the memory. Those chunks and embeddings are saved to Supabase Vector Store (using pgvector). If Row Level Security is enabled, your API key needs insert/select permission on the documents table or retrieval will quietly fail.
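For reference, each stored chunk ends up as one row in the `documents` table. The column layout below (`content`, `metadata`, `embedding`) is an assumption based on the common LangChain-style schema that Supabase vector store integrations typically use; check your own table definition if yours differs.

```javascript
// Example shape of one row in the Supabase `documents` table
// (assumed LangChain-style layout: content + metadata + embedding).
const exampleRow = {
  content: "Build in public, but edit in private.",
  metadata: {
    source: "user_ingest", // set by the document loader at ingest time
    style: true,           // marks this chunk as a voice/style sample
    topic: "creator_mindset",
  },
  embedding: [/* ~1536 floats from OpenAI's embedding model */],
};

console.log(Object.keys(exampleRow));
```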
You request new content and get a formatted result. The “Content Request Form” collects the prompt, the Agent retrieves the most relevant voice snippets from Supabase, then the OpenAI chat model drafts a post, quote, reply, and image prompt. Finally, the output is formatted as HTML and shown on a results page you can copy from.
You can easily modify the form fields to capture extra context like audience, offer, or links you want included. See the full implementation guide below for customization options.
Step-by-Step Implementation Guide
Step 1: Configure the Form Triggers
This workflow uses two form triggers to ingest knowledge base content and request new content generation.
- Open KB Intake Form and set Form Title to `Add to KB` and Form Description to `Paste your past posts or notes. Optionally tag topic and mark as style sample.`
- In KB Intake Form, confirm fields include a required content textarea and an optional topic text input.
- Open Content Request Form and set Form Title to `Generate Posts` and Form Description to `Create post/quote/reply/image_prompt from your knowledge base.`
- In Content Request Form, confirm fields for topic, topK (number), and hint exist.
Step 2: Connect Supabase for Vector Storage and Retrieval
Supabase is used for storing embeddings and retrieving context during generation.
- Open Supabase Vector Storage and set Mode to `insert` and Table Name to `documents`.
- Credential Required: Connect your supabaseApi credentials in Supabase Vector Storage.
- Open KB Supabase Retrieval Tool and set Mode to `retrieve-as-tool`, Tool Name to `kb_vectorstore`, and Tool Description to `KB search over Supabase documents (use filters like {"topic":"creator_mindset","style":"true"})`.
- Credential Required: Connect your supabaseApi credentials in KB Supabase Retrieval Tool.
Step 3: Set Up the Knowledge Ingestion Pipeline
This pipeline takes intake form content, standardizes it, splits it, embeds it, and stores it in Supabase.
- In Standardize Intake Data, set Mode to `raw` and JSON Output to `={{ ({ content: $json.fields.content, topic: ($json.fields.topic || ''), style: 'true' }) }}`.
- In Metadata Document Loader, set JSON Data to `={{ $json.content }}` and ensure metadata includes source: `user_ingest`, style: `true`, and topic: `={{ $json.topic || '' }}`.
- In Recursive Text Segmenter, set Chunk Size to `1200` and Chunk Overlap to `150`.
- Credential Required: Connect your openAiApi credentials in OpenAI Embedding Builder.
- Confirm the execution flow: KB Intake Form → Standardize Intake Data → Supabase Vector Storage → Intake Completion Page.
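The Standardize Intake Data expression above is plain JavaScript. Written out as a standalone function (field names taken from the form configuration), it does this:

```javascript
// Equivalent of the Standardize Intake Data expression: flatten the
// form submission into the object the downstream loader expects.
function standardizeIntake(formSubmission) {
  const fields = formSubmission.fields || {};
  return {
    content: fields.content,
    topic: fields.topic || "",  // topic is optional on the form
    style: "true",              // stored as a string, matching the workflow
  };
}

console.log(standardizeIntake({
  fields: { content: "My best post", topic: "creator_mindset" },
}));
```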
Step 4: Set Up AI Generation and Retrieval
This path collects request inputs, retrieves relevant snippets from Supabase, and generates output using an AI agent.
- In Assemble Generation Inputs, set Mode to `raw` and JSON Output to `={{ ({ topic: $json.fields.topic || '', style: 'true', topK: Number($json.fields.topK || 5), hint: $json.fields.hint || '', filters: { topic: ($json.fields.topic || ''), style: true } }) }}`.
- In Chat Model Configuration, set Model to `gpt-4.1-mini`.
- Open Content Generator Agent and keep the Text prompt exactly as defined (it includes expressions like `{{ $json.topK || 5 }}` and `{{ $json.filters }}`).
- Confirm the tool connection: KB Supabase Retrieval Tool is attached to Content Generator Agent as an AI tool.
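The Assemble Generation Inputs expression can be read the same way. As a standalone function, it coerces `topK` to a number (form inputs arrive as strings) and builds the retrieval filters the agent's vector-store tool will use:

```javascript
// Equivalent of the Assemble Generation Inputs expression.
function assembleInputs(formSubmission) {
  const fields = formSubmission.fields || {};
  return {
    topic: fields.topic || "",
    style: "true",
    topK: Number(fields.topK || 5), // default to 5 retrieved snippets
    hint: fields.hint || "",
    filters: { topic: fields.topic || "", style: true },
  };
}

console.log(assembleInputs({ fields: { topic: "growth", topK: "3" } }));
```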
Step 5: Configure Output Rendering
The generated JSON is converted to HTML and displayed in a completion page.
- In Format Output as HTML, set Mode to `raw` and keep JSON Output set to the full HTML-builder expression that parses `cur.output` and builds `completionTitle` and `completionMessage`.
- In Generation Results Page, set Operation to `completion`, Completion Title to `={{ $json.completionTitle }}`, and Completion Message to `={{ $json.completionMessage }}`.
- Confirm the execution flow: Content Generator Agent → Format Output as HTML → Generation Results Page.
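Here is a simplified sketch of what the HTML-builder step does, assuming the agent returns JSON with `post`, `quote`, `reply`, and `image_prompt` keys (the key names and the title text are assumptions; the workflow's real expression is the source of truth):

```javascript
// Sketch of the Format Output as HTML step: parse the agent's JSON
// reply and assemble the completion-page fields the results page reads.
function formatOutput(agentReply) {
  const parsed = typeof agentReply === "string" ? JSON.parse(agentReply) : agentReply;
  const section = (title, body) => `<h3>${title}</h3><p>${body}</p>`;
  return {
    completionTitle: "Your generated content", // hypothetical title text
    completionMessage:
      section("Post", parsed.post) +
      section("Quote", parsed.quote) +
      section("Reply", parsed.reply) +
      section("Image Prompt", parsed.image_prompt),
  };
}

console.log(formatOutput('{"post":"a","quote":"b","reply":"c","image_prompt":"d"}'));
```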
Step 6: Test and Activate Your Workflow
Run both the ingestion and generation paths to validate storage, retrieval, and output formatting.
- Click Execute Workflow and submit the KB Intake Form with sample content and a topic.
- Verify a successful run shows Intake Completion Page with a title set to `={{ $json.metadata.topic }}` and the stored page content.
- Submit the Content Request Form with a topic and optional hint, then confirm Generation Results Page shows the formatted HTML sections for Post, Quote, Reply, and Image Prompt.
- After testing, toggle the workflow to Active so both forms are live for production use.
Common Gotchas
- Supabase credentials can expire or use the wrong key type. If things break, check your Supabase project settings (API keys) and confirm the workflow is pointing at the right project URL.
- If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
- Default prompts in AI nodes are generic. Add your brand voice early or you’ll be editing outputs forever.
Frequently Asked Questions
How long does setup take?
About 15 minutes if your Supabase table is ready.
Do I need to know how to code?
No. You’ll connect Supabase and OpenAI, then copy in a few IDs and keys. The forms handle the input, and n8n handles the rest.
Is there a free way to run this?
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in OpenAI API usage costs, which are usually small for short generations and embeddings.
Should I use n8n Cloud or self-host?
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
Can I adapt this to generate threads instead of single posts?
Yes, but you’ll want to adjust the agent instructions and the output formatter. In n8n, update the “Content Generator Agent” prompt to request a 5–8 tweet thread with a hook, numbered beats, and a CTA, then change “Format Output as HTML” so it renders each tweet as its own block. Common upgrades include adding a “link to include” field in the Content Request Form, enforcing a max character count per tweet, and saving outputs back into Supabase for reuse later.
Why is my Supabase connection failing or retrieval coming back empty?
Usually it’s the key or permissions. Confirm you’re using the correct Supabase project URL, the key has insert/select rights on the documents table, and pgvector is enabled. If Row Level Security is turned on, your policies must allow the operations the workflow is trying to do; otherwise retrieval comes back empty and the agent “forgets” your voice.
How much content and volume can this handle?
Practically, a lot. On n8n Cloud Starter you are limited by monthly executions, while self-hosting has no execution cap (it depends on your server). Supabase vector search is fast for a personal knowledge base, and most creators run this comfortably with hundreds or thousands of stored snippets. If you plan to ingest years of content, keep chunks tidy and be mindful of OpenAI embedding costs during bulk imports.
Do I need n8n, or would Zapier or Make work?
For this workflow, n8n is the better fit because it relies on an AI Agent plus vector retrieval, not just moving fields between apps. n8n also makes it easier to self-host, which is helpful when you start generating a lot. Zapier and Make can work if you only want a basic “prompt in, text out” flow, but they get clunky when you add retrieval, formatting, and branching. Honestly, the right choice depends on volume and how picky you are about voice quality. Talk to an automation expert if you’re not sure which fits.
Once your best writing is stored in Supabase, the workflow stops you from reinventing your voice every time you publish. Set it up, add examples as you go, and let the drafting feel easy again.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.