Perplexity to Ghost, cited drafts ready to publish
Your content pipeline probably breaks in the same place every time: research. Links get dropped in Slack, sources get lost, and you end up rewriting sections because nobody can prove where a claim came from. That’s how “quick drafts” turn into long editing days.
This Perplexity-to-Ghost automation is built for content marketers first, but founders shipping thought leadership and agency leads building client posts feel the same pain. You get a structured draft with citations, already organized, so review is about polishing instead of detective work.
Below you’ll see exactly what the workflow does, the real-world time savings, and what you need to connect so your next topic becomes a publish-ready Ghost draft with sources intact.
How This Automation Works
The full n8n workflow, from trigger to final output:
n8n Workflow Template: Perplexity to Ghost, cited drafts ready to publish
flowchart LR
subgraph sg0["n8n Form Flow"]
direction LR
n0["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/merge.svg' width='40' height='40' /></div><br/>Merge chapters title and text"]
n1["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Final article text"]
n2["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/form.svg' width='40' height='40' /></div><br/>n8n Form Trigger"]
n3["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/ghost.svg' width='40' height='40' /></div><br/>Ghost"]
n4@{ icon: "mdi:robot", form: "rounded", label: "Create title", pos: "b", h: 48 }
n7@{ icon: "mdi:wrench", form: "rounded", label: "Perplexity_tool", pos: "b", h: 48 }
n8@{ icon: "mdi:wrench", form: "rounded", label: "Perplexity_tool1", pos: "b", h: 48 }
n10@{ icon: "mdi:robot", form: "rounded", label: "Research Leader 🔬", pos: "b", h: 48 }
n11@{ icon: "mdi:robot", form: "rounded", label: "Structured Output Parser", pos: "b", h: 48 }
n12@{ icon: "mdi:swap-vertical", form: "rounded", label: "Delegate to Research Assista..", pos: "b", h: 48 }
n13@{ icon: "mdi:robot", form: "rounded", label: "Research Assistant", pos: "b", h: 48 }
n14@{ icon: "mdi:robot", form: "rounded", label: "Project Planner", pos: "b", h: 48 }
n15@{ icon: "mdi:brain", form: "rounded", label: "OpenAI Chat Model", pos: "b", h: 48 }
n16@{ icon: "mdi:brain", form: "rounded", label: "OpenAI Chat Model1", pos: "b", h: 48 }
n17@{ icon: "mdi:brain", form: "rounded", label: "OpenAI Chat Model2", pos: "b", h: 48 }
n18@{ icon: "mdi:brain", form: "rounded", label: "OpenAI Chat Model3", pos: "b", h: 48 }
n19@{ icon: "mdi:robot", form: "rounded", label: "Editor", pos: "b", h: 48 }
n20@{ icon: "mdi:wrench", form: "rounded", label: "Perplexity_tool2", pos: "b", h: 48 }
n19 --> n4
n4 --> n3
n7 -.-> n10
n14 --> n12
n8 -.-> n13
n20 -.-> n14
n2 --> n10
n15 -.-> n14
n1 --> n19
n16 -.-> n19
n16 -.-> n4
n17 -.-> n10
n18 -.-> n13
n13 --> n0
n10 --> n14
n11 -.-> n14
n0 --> n1
n12 --> n0
n12 --> n13
end
subgraph sg1["Execute Workflow Flow"]
direction LR
n5@{ icon: "mdi:play-circle", form: "rounded", label: "Execute Workflow Trigger", pos: "b", h: 48 }
n6@{ icon: "mdi:swap-vertical", form: "rounded", label: "Response", pos: "b", h: 48 }
n9["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Perplexity API"]
n9 --> n6
n5 --> n9
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n2,n5 trigger
class n4,n10,n11,n13,n14,n19 ai
class n15,n16,n17,n18 aiModel
class n7,n8,n20 ai
class n9 api
class n1 code
classDef customIcon fill:none,stroke:none
class n0,n1,n2,n3,n9 customIcon
The Problem: Research Takes Longer Than Writing
Writing is rarely the bottleneck. It’s the prep work that drags: searching, validating sources, pulling quotes, and building an outline that doesn’t fall apart halfway through. Then comes the messy part. Someone pastes a stat with no link. Another person adds a claim from “a study” without the study. By the time your draft reaches review, the editor isn’t editing. They’re chasing citations, fixing structure, and trying to guess what the author meant.
It adds up fast. Here’s where it breaks down when you keep it manual.
- You lose time re-finding sources because the original links weren’t captured cleanly.
- Outlines change mid-draft, which means whole sections get rewritten instead of revised.
- Citations end up inconsistent across writers, so reviewers can’t trust the draft on first pass.
- Publishing becomes a separate “admin task” because the final copy still has to be moved into Ghost.
The Solution: Perplexity Research + Multi-Agent Drafting, Sent to Ghost
This workflow turns one topic into a complete, research-backed article draft, then creates a draft post in Ghost for review. It starts with a simple input (through an n8n form trigger), then a “lead analyst” agent plans the approach and builds a table of contents. A project planning agent breaks the outline into sections. From there, research support agents use Perplexity (via an HTTP API call) to gather credible sources and citations for each section. Finally, an editorial agent compiles everything into a coherent narrative, generates a headline, and publishes the result as a Ghost draft so it’s ready for your normal approval and publishing process.
The workflow begins when you submit a topic. It then coordinates research and drafting across multiple agents, merges section outputs into one article, and pushes the final cited draft straight into Ghost. No copy-paste relay race.
What You Get: Automation vs. Results
| What This Workflow Automates | Results You’ll Get |
|---|---|
| Outline and table-of-contents planning from a single topic | A structured plan without manual outlining |
| Per-section research through Perplexity, with citations captured | Sources stay attached to claims, so reviewers stop chasing links |
| Merging section drafts into one edited narrative with a headline | A coherent article instead of stitched-together fragments |
| Creating the draft post in Ghost | Roughly 3–4 hours of prep per article becomes background work |
Example: What This Looks Like
Say you publish two researched posts a week. Manually, a typical process looks like 30 minutes building an outline, about 2 hours collecting sources and quotes, and another hour cleaning citations and formatting before the editor can even start. That’s roughly 3–4 hours of prep per article. With this workflow, you submit the topic in a couple of minutes, let the agents research and draft in the background (often around 20–40 minutes), and the cited draft lands in Ghost ready for review. You still edit, but you’re editing content, not hunting links.
What You’ll Need
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- Ghost site with Admin API access, for creating drafts and running your publishing workflow.
- OpenRouter account with access to Perplexity models, used for the research API calls.
- OpenRouter API key (get it from your OpenRouter dashboard under API keys).
Skill level: Intermediate. You will connect a few accounts, add an API key, and test a run end-to-end in n8n.
Don’t want to set this up yourself? Talk to an automation expert (free 15-minute consultation).
How It Works
You submit a topic through a form trigger. That input can include optional guidance like tone, target word count, or number of sections, so the draft matches the way you publish.
A lead “research analyst” creates the plan. The agent builds a table of contents and hands it to a project planner that turns the outline into discrete section tasks. Small pieces. Easier to control.
Perplexity is used for research with citations. The workflow calls Perplexity through an HTTP request (via OpenRouter), then parses the structured output so sources, claims, and section notes stay organized. Those section results are merged into a single narrative.
The editor agent compiles and ships the draft to Ghost. A headline is generated, then the workflow publishes a draft post in Ghost so your team can review, tweak, and publish using your existing process.
You can easily modify the topic input fields to include things like “audience,” “product mention rules,” or a strict style guide based on your needs. See the full implementation guide below for customization options.
Step-by-Step Implementation Guide
Step 1: Configure the Form Trigger
This workflow starts when a user submits the form, sending the payload into the research pipeline.
- Add and open n8n Form Trigger.
- Keep the default form settings unless you need additional input fields.
- Confirm n8n Form Trigger connects to Research Leader 🔬 as shown in the workflow diagram.
Step 2: Set Up the Primary Research and Planning Agents
These nodes interpret the form input, structure the research, and plan tasks.
- Open Research Leader 🔬 and confirm it is connected to Project Planner.
- In Project Planner, ensure OpenAI Chat Model is connected as the language model.
- Attach Structured Output Parser to Project Planner as the output parser.
- Attach Perplexity_tool2 to Project Planner as a tool.
- Confirm OpenAI Chat Model2 is connected as the language model for Research Leader 🔬.
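For the output parser to do its job, the planner's table of contents needs a schema. A minimal sketch of one possible JSON schema; the property names are assumptions you should align with whatever the downstream nodes actually read:

```javascript
// Sketch: a JSON schema for the planner's table of contents, so section tasks
// come out machine-readable instead of free text. Property names ("chapters",
// "heading", "researchQuestions") are assumptions for illustration.
const tocSchema = {
  type: "object",
  properties: {
    title: { type: "string" },
    chapters: {
      type: "array",
      items: {
        type: "object",
        properties: {
          heading: { type: "string" },
          researchQuestions: { type: "array", items: { type: "string" } },
        },
        required: ["heading"],
      },
    },
  },
  required: ["title", "chapters"],
};

console.log(tocSchema.required); // → [ 'title', 'chapters' ]
```

Paste an equivalent schema into the structured output parser node so each chapter becomes a discrete, addressable task.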
Step 3: Configure Task Assignment and Parallel Research
Research tasks are split and processed in parallel to speed up section drafting.
- Open Delegate to Research Assistant and confirm it follows Project Planner.
- Ensure Delegate to Research Assistant outputs to both Merge chapters title and text and Research Assistant in parallel.
- Verify Research Assistant is connected to OpenAI Chat Model3 as its language model.
- Attach Perplexity_tool1 to Research Assistant as a tool.
Step 4: Assemble and Review the Narrative
Outputs are merged, assembled into a narrative, and edited before headline generation.
- Open Merge chapters title and text and keep the default merge behavior unless you require a specific merge mode.
- Open the Final article text Code node and adjust the narrative assembly logic if needed.
- Confirm Final article text connects to Editor.
- Ensure OpenAI Chat Model1 is connected as the language model for both Editor and Create title.
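The assembly step is ordinary Code-node JavaScript. A minimal sketch, assuming each merged item carries `title` and `text` fields (adjust the property names to match your merge node's output):

```javascript
// Sketch: join merged chapter items into one markdown article body. The
// { title, text } item shape is an assumption — inspect the merge node's
// actual output and rename accordingly.
function assembleArticle(chapters) {
  return chapters
    .map((ch) => `## ${ch.title}\n\n${ch.text}`) // each chapter becomes an H2 section
    .join("\n\n");
}

const article = assembleArticle([
  { title: "The Problem", text: "Research takes longer than writing." },
  { title: "The Fix", text: "Automate the research and merge steps." },
]);
console.log(article.startsWith("## The Problem")); // → true
```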
Step 5: Configure Publishing to Ghost
The edited narrative and generated headline are sent to Ghost as a draft post.
- Open the Ghost node and map the title and body fields from Create title.
- Keep the connection from Create title to the Ghost node intact.
Step 6: Configure the Perplexity API Helper Flow
A second flow, started by Execute Workflow Trigger, handles the actual Perplexity API calls on behalf of the agents.
- Open Execute Workflow Trigger and confirm it connects to the Perplexity API node.
- Configure the Perplexity API HTTP Request node with your request settings and verify it feeds the Response node.
- Open Response and shape the payload you want to return to the calling agents.
- Note that Perplexity_tool is attached as a tool to Research Leader 🔬.
Step 7: Test & Activate
Verify the workflow end-to-end before turning it on for production use.
- Click Execute Workflow and submit a sample entry via the form.
- Confirm that Delegate to Research Assistant runs and that Merge chapters title and text and Research Assistant execute in parallel.
- Verify the draft flows through Final article text, Editor, and Create title before reaching the Ghost node.
- Trigger Execute Workflow Trigger manually to validate the Perplexity API call and the Response output.
- When satisfied, toggle the workflow to Active for production use.
Common Gotchas
- Perplexity (via OpenRouter) credentials can expire or be pasted incorrectly. If things break, check the Authorization header format (Bearer <api-key>) in your Perplexity API HTTP Request node first.
- If you’re using Wait nodes or external processing, response times vary. Bump up the wait duration if downstream nodes fail on empty responses, especially when multiple section research tasks run at once.
- Default prompts in AI nodes are generic. Add your brand voice early (tone, examples you like, banned phrases), or you’ll be editing outputs forever.
Frequently Asked Questions
How long does setup take?
About 30–60 minutes if you already have your API keys and Ghost access ready.
Do I need coding skills?
No. You will mainly connect credentials and edit a few text fields. If you can copy an API key and test a form submission, you can run this.
Can I run this for free?
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in OpenRouter/Perplexity usage, which is usually a few dollars for several long drafts depending on depth.
Where should I host n8n?
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
Can I customize the prompts and output style?
Yes, and you should. Update the prompts used by the research and editorial agents so they follow your voice, headings, and citation preferences. Common tweaks include enforcing a fixed outline style, adding “do not mention competitors,” and setting a hard word count per section so drafts don’t sprawl.
Why are my Perplexity API calls failing?
Most of the time it’s an API key issue: the Authorization header should be formatted exactly as Bearer <api-key>. If that looks right, check whether your OpenRouter account has access to the Perplexity model you selected, and watch for rate limits when the workflow fires many section requests at once.
Is there a limit on how many articles I can generate?
If you self-host n8n, there’s no hard execution cap (it mainly depends on your server and API limits). On n8n Cloud, your monthly execution limit depends on plan, and this workflow usually consumes multiple executions per article because it splits research into sections.
Is n8n really better than Zapier or Make for this?
Usually, yes, because the “multi-agent + merge outputs + structured parsing” part gets complicated fast in simpler builders. n8n handles branching and assembling results without turning your scenario into a fragile maze. You also get the option to self-host, which matters when you run lots of drafts or need tighter control over data. Zapier or Make can still work if all you want is “topic in, draft out” with minimal logic, but you’ll hit limits when you start adding section-level research or strict formatting rules. If you want a second opinion before you commit, talk to an automation expert.
Once this is running, your team stops paying the “citation tax” on every single article. The workflow handles the repetitive research-and-assemble work so your review time goes where it should: making the piece sharper.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.