RSS to Notion, summarized news with OpenAI
Keeping up with tech news sounds easy until you’re drowning in tabs, half-read newsletters, and “I’ll save this for later” links you never open again. Then you still have to write a summary, paste it somewhere, and hope you don’t save the same story twice.
This hits marketing leads who need fresh angles for campaigns, but founders tracking competitors and analysts doing market research feel it too. With RSS Notion automation, you get clean, searchable briefs in one place, without doing the copy-paste dance.
This workflow pulls stories from The Verge and TechCrunch, summarizes them with OpenAI, and stores everything in Notion. You’ll see what it fixes, what you need, and how the flow works before you touch anything.
How This Automation Works
See how this solves the problem:
n8n Workflow Template: RSS to Notion, summarized news with OpenAI
```mermaid
flowchart LR
subgraph sg0["Schedule Flow"]
direction LR
n0@{ icon: "mdi:cog", form: "rounded", label: "fetch finished", pos: "b", h: 48 }
n1@{ icon: "mdi:brain", form: "rounded", label: "OpenAI Chat Model1", pos: "b", h: 48 }
n2@{ icon: "mdi:robot", form: "rounded", label: "Basic LLM Chain1", pos: "b", h: 48 }
n3["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/notion.dark.svg' width='40' height='40' /></div><br/>Create a database page1"]
n4["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Code in JavaScript1"]
n5["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/html.dark.svg' width='40' height='40' /></div><br/>HTML1"]
n6["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>HTTP Request1"]
n7["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/notion.dark.svg' width='40' height='40' /></div><br/>Get many database pages1"]
n8@{ icon: "mdi:swap-horizontal", form: "rounded", label: "If1", pos: "b", h: 48 }
n9@{ icon: "mdi:swap-vertical", form: "rounded", label: "Loop Over Items1", pos: "b", h: 48 }
n10@{ icon: "mdi:brain", form: "rounded", label: "OpenAI Chat Model", pos: "b", h: 48 }
n11@{ icon: "mdi:robot", form: "rounded", label: "Basic LLM Chain", pos: "b", h: 48 }
n12@{ icon: "mdi:swap-vertical", form: "rounded", label: "Loop Over Items", pos: "b", h: 48 }
n13["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Code in JavaScript"]
n14["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/html.dark.svg' width='40' height='40' /></div><br/>HTML"]
n15["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>HTTP Request"]
n16["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/notion.dark.svg' width='40' height='40' /></div><br/>Get many database pages"]
n17@{ icon: "mdi:swap-horizontal", form: "rounded", label: "If", pos: "b", h: 48 }
n18@{ icon: "mdi:cog", form: "rounded", label: "Crypto1", pos: "b", h: 48 }
n19@{ icon: "mdi:cog", form: "rounded", label: "Crypto", pos: "b", h: 48 }
n20["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/notion.dark.svg' width='40' height='40' /></div><br/>Create a database page"]
n21@{ icon: "mdi:cog", form: "rounded", label: "TechCrunch", pos: "b", h: 48 }
n22@{ icon: "mdi:cog", form: "rounded", label: "The Verge", pos: "b", h: 48 }
n23@{ icon: "mdi:play-circle", form: "rounded", label: "Schedule Trigger", pos: "b", h: 48 }
n24@{ icon: "mdi:play-circle", form: "rounded", label: "When clicking ‘Execute workf..", pos: "b", h: 48 }
n17 --> n12
n17 --> n15
n8 --> n9
n8 --> n6
n14 --> n13
n5 --> n4
n19 --> n12
n18 --> n9
n22 --> n19
n21 --> n18
n15 --> n14
n6 --> n5
n11 --> n20
n12 --> n0
n12 --> n16
n2 --> n3
n9 --> n0
n9 --> n7
n23 --> n21
n23 --> n22
n10 -.-> n11
n13 --> n11
n1 -.-> n2
n4 --> n2
n20 --> n12
n3 --> n9
n16 --> n17
n7 --> n8
n24 --> n22
n24 --> n21
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n23,n24 trigger
class n2,n11 ai
class n1,n10 aiModel
class n8,n17 decision
class n3,n7,n16,n20 database
class n6,n15 api
class n4,n13 code
class n23 disabled
classDef customIcon fill:none,stroke:none
class n3,n4,n5,n6,n7,n13,n14,n15,n16,n20 customIcon
```
The Challenge: Turning Daily News Into Usable Research
RSS is great at delivering volume. It’s terrible at delivering clarity. You skim headlines, open a few posts, then lose the thread because the details are spread across a dozen articles and none of them are summarized in your words. If you do save them, you end up with a messy Notion page full of raw links and copy-pasted chunks that aren’t searchable in any meaningful way. And duplicates creep in quietly, so later you’re re-reading the same announcement you already captured last week.
It adds up fast. The friction compounds when you try to make this “a daily habit” and reality hits.
- Manual collecting from TechCrunch and The Verge turns into a daily 30-minute chore that never feels “done.”
- Duplicate links sneak into your notes, so your “database” becomes unreliable and annoying to use.
- Summaries are inconsistent because you write them when you’re rushed, which means they’re hard to skim later.
- Full article text is missing, so search inside Notion only finds titles and whatever you pasted.
The Fix: Auto-Summarized RSS Stories Saved to Notion
This workflow takes two high-signal tech feeds (The Verge and TechCrunch) and turns them into a structured Notion knowledge base you’ll actually use. It runs on a schedule (daily at 11 AM, though it’s disabled by default) or manually when you want to test. Each RSS item gets a unique SHA256 hash based on its URL, so duplicates can be caught early without guessing. If the story is new, the workflow fetches the full article page, extracts the body text, cleans it up, and asks OpenAI to produce a concise plain-text summary capped at about 1,500 characters. Finally, it creates a new Notion database page with the title, summary, date, source, URL, the hash, and the full cleaned text for search.
The workflow starts by reading both RSS feeds, then processes stories in batches so it can handle a normal news day without choking. After a Notion duplicate check, it only spends LLM time on genuinely new items. The end result is a tidy Notion row per article, ready for searching, tagging, or turning into a digest.
What Changes: Before vs. After
| What This Eliminates | Impact You’ll See |
|---|---|
| Daily 30-minute collection chore across The Verge and TechCrunch | Stories land in Notion on schedule, already summarized |
| Duplicate links quietly polluting your notes | The SHA256 hash check saves each story exactly once |
| Rushed, inconsistent hand-written summaries | Uniform plain-text summaries capped at roughly 1,500 characters |
| Title-only search in Notion | Full cleaned article text stored alongside every summary |
Real-World Impact
Say you review 20 new stories a day across two feeds and you usually spend about 5 minutes per story to open it, skim it, paste the link into Notion, and write a quick summary. That’s roughly 100 minutes a day. With this workflow, you’ll spend maybe 10 minutes scanning the new Notion entries and starring the ones worth sharing, while OpenAI handles the summaries and Notion storage in the background. That’s about 90 minutes back per day, or roughly 7.5 hours over a five-day week.
Requirements
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- Notion for storing briefs in a database.
- OpenAI API to generate plain-text summaries.
- Notion API access (create an internal integration in Notion).
Skill level: Beginner. You’ll connect Notion and OpenAI, then match a few database fields.
Need help implementing this? Talk to an automation expert (free 15-minute consultation).
The Workflow Flow
A scheduled run (or a manual test) starts everything. The workflow can run daily at a set time, and there’s also a manual trigger so you can test without waiting.
The Verge and TechCrunch feeds are pulled, then normalized. Each article URL gets converted into a SHA256 hash, which becomes the “fingerprint” used to detect duplicates quickly.
Only new articles move forward. For each item, the workflow checks your Notion database to see if that hash already exists. If it does, the item is skipped with no extra processing cost.
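The "only new articles move forward" gate can be sketched in plain JavaScript, with a Set standing in for the Notion database of saved hashes (in the actual workflow, the lookup is a Notion query per item):

```javascript
// "Check then process": skip any item whose fingerprint is already saved.
// The Set is a stand-in for the Notion database the workflow queries.
const savedHashes = new Set(["abc123"]); // hashes already stored in Notion

function filterNewItems(items) {
  return items.filter((item) => !savedHashes.has(item.hash));
}

const incoming = [
  { hash: "abc123", title: "Already captured last week" },
  { hash: "def456", title: "Genuinely new story" },
];

console.log(filterNewItems(incoming).map((i) => i.title));
// [ 'Genuinely new story' ]
```

The payoff is cost control: items filtered out here never reach the HTTP fetch or the OpenAI call, so duplicates cost nothing.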
Full content is fetched, cleaned, summarized, and saved. New items trigger an HTTP fetch of the article page, HTML extraction (using the right CSS selectors per site), cleanup in JavaScript, then an OpenAI summary. A Notion page is created with the structured fields plus full cleaned text.
You can easily modify the RSS sources to include more sites based on your needs. See the full implementation guide below for customization options.
Step-by-Step Implementation Guide
Step 1: Configure the Schedule Trigger
Set up both the scheduled and manual triggers that kick off the RSS ingestion.
- Open Scheduled Automation Trigger and set the schedule rule to run at `11` (hour). Note the node is currently disabled; enable it when ready for production.
- Confirm Manual Execution Start is present for on-demand testing.
- Verify parallel execution: Scheduled Automation Trigger outputs to both Read TechCrunch Feed and Read Verge Feed in parallel.
- Verify the same parallel behavior for manual runs: Manual Execution Start outputs to both Read Verge Feed and Read TechCrunch Feed in parallel.
Step 2: Connect RSS Sources and Generate Article Hashes
Configure the RSS feeds and create a unique hash for deduplication.
- In Read TechCrunch Feed, set URL to `https://techcrunch.com/feed/`.
- In Read Verge Feed, set URL to `https://www.theverge.com/rss/index.xml`.
- In Generate Hash B, set Type to `SHA256`, Value to `{{ $json.link }}`, and Data Property Name to `hash`.
- In Generate Hash A, set Type to `SHA256`, Value to `{{ $json.link }}`, and Data Property Name to `hash`.
Step 3: Check Notion for Existing Entries
Use Notion queries to prevent duplicate entries based on the hash.
- Open Query Notion Pages A and select the Notion database for Database ID (currently `[YOUR_ID]`). Credential Required: Connect your notionApi credentials.
- In Query Notion Pages A, confirm the filter condition uses Hash|rich_text equals `{{ $json.hash }}`.
- Open Query Notion Pages B and select the Notion database for Database ID (currently `[YOUR_ID]`). Credential Required: Connect your notionApi credentials.
- In Query Notion Pages B, confirm the filter condition uses Hash|rich_text equals `{{ $json.hash }}`.
- Check that Check Existing Entry A and Check Existing Entry B use the expressions `{{ $item("0").$node["Query Notion Pages A"].json["id"] }}` and `{{ $item("0").$node["Query Notion Pages B"].json["id"] }}` to detect duplicates.
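Under the hood, this duplicate check boils down to a single Notion database-query filter. A sketch of the filter object the node sends — the property name `Hash` mirrors the workflow's field, and the helper function name is illustrative, not part of the template:

```javascript
// Build the Notion database-query filter the duplicate check relies on:
// find pages whose "Hash" rich_text property equals the new fingerprint.
function buildDuplicateFilter(hash) {
  return {
    filter: {
      property: "Hash",
      rich_text: { equals: hash },
    },
  };
}

const f = buildDuplicateFilter("def456");
console.log(JSON.stringify(f));
// {"filter":{"property":"Hash","rich_text":{"equals":"def456"}}}
```

If the query returns zero pages, the article is new and proceeds to fetching; if it returns a page, the If node routes the item straight back to the loop.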
Step 4: Fetch, Extract, and Assemble Full Article Text
Retrieve each article page and clean its HTML into a full-text field used for summarization.
- In Fetch Article Page A, set URL to `{{ $('Generate Hash A').item.json.link }}`.
- In Extract HTML Body A, set Operation to `extractHtmlContent` and the CSS selector to `.duet--article--article-body-component p` with Return Array enabled.
- In Assemble Full Text A, keep the JavaScript code that filters empty paragraphs and joins content into `fullArticle`.
- In Fetch Article Page B, set URL to `{{ $('Generate Hash B').item.json.link }}`.
- In Extract HTML Body B, set Operation to `extractHtmlContent` and the CSS selector to `.entry-content p` with Return Array enabled.
- In Assemble Full Text B, keep the JavaScript code that builds the `fullArticle` field from extracted paragraphs.
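The Assemble Full Text nodes amount to a few lines of JavaScript. A sketch under the assumption that the HTML node (with Return Array enabled) emits an array of paragraph strings:

```javascript
// Sketch of the "Assemble Full Text" Code node: drop empty or
// whitespace-only paragraphs, then join the rest into one fullArticle
// string for summarization and Notion full-text search.
function assembleFullText(paragraphs) {
  const cleaned = paragraphs
    .map((p) => (p || "").trim())
    .filter((p) => p.length > 0);
  return cleaned.join("\n\n");
}

const extracted = ["First paragraph.", "  ", "", "Second paragraph."];
console.log(assembleFullText(extracted));
// First paragraph.
//
// Second paragraph.
```

Joining with blank lines keeps paragraph boundaries visible to the summarizer, which tends to produce better-structured output than one run-on string.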
Step 5: Set Up AI Summarization
Use GPT to summarize each article before storing it in Notion.
- Open Summarize Article A and confirm Text is set to `{{ $json.fullArticle }}` with the custom prompt already defined.
- Ensure LLM Chat Engine A is connected as the language model for Summarize Article A. Credential Required: Connect your openAiApi credentials in LLM Chat Engine A.
- Open Summarize Article B and confirm Text is set to `{{ $json.fullArticle }}` with the same summarization prompt.
- Ensure LLM Chat Engine B is connected as the language model for Summarize Article B. Credential Required: Connect your openAiApi credentials in LLM Chat Engine B.
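The roughly 1,500-character cap mentioned in the overview is worth enforcing in code as well as in the prompt, since models occasionally overshoot. A hypothetical safety-net helper (not part of the template) that truncates at a word boundary before the summary reaches Notion:

```javascript
// Hypothetical safety net for the ~1,500-character summary cap:
// trust the prompt to keep things short, but hard-truncate at the
// last word boundary if the model overshoots.
function capSummary(text, limit = 1500) {
  if (text.length <= limit) return text;
  const cut = text.slice(0, limit);
  const lastSpace = cut.lastIndexOf(" ");
  return (lastSpace > 0 ? cut.slice(0, lastSpace) : cut) + "…";
}

console.log(capSummary("short summary"));          // unchanged
console.log(capSummary("word ".repeat(400)).length <= 1500); // true
```

Dropping this into a small Code node between the LLM chain and the Notion node is one low-effort way to guarantee the summary always fits your database field.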
Step 6: Configure Notion Output
Create Notion database pages for each summarized article and loop through items.
- Open Create Notion Entry A and select the Notion database for Database ID. Credential Required: Connect your notionApi credentials.
- In Create Notion Entry A, set Title to `{{ $('Generate Hash A').item.json.title }}` and map properties like Summary, Date, Hash, URL, Digest Status, source, and Full Article using the existing expressions.
- Open Create Notion Entry B and select the Notion database for Database ID. Credential Required: Connect your notionApi credentials.
- In Create Notion Entry B, set Title to `{{ $('Generate Hash B').item.json.title }}` and map properties such as Summary, Date, Hash, URL, Digest Status, source, and Full Article using the existing expressions.
- Confirm Create Notion Entry A → Iterate Articles A and Create Notion Entry B → Iterate Articles B to continue batch processing until Fetch Completed.
Step 7: Test & Activate Your Workflow
Run a manual test, verify data in Notion, and then enable the scheduled automation.
- Click Execute Workflow using Manual Execution Start to run both feeds in parallel.
- Confirm that Query Notion Pages A and Query Notion Pages B either skip existing items or pass new items to article fetching.
- Verify that Summarize Article A and Summarize Article B return summaries and Create Notion Entry A / Create Notion Entry B create pages in your Notion database.
- When results look correct, enable Scheduled Automation Trigger for production runs.
Watch Out For
- Notion credentials can expire or need specific permissions. If things break, check your Notion integration access and the database sharing settings first.
- Article pages sometimes load slowly or return sparse HTML. If downstream nodes fail on empty responses, inspect the HTTP Request output and the extraction selectors before blaming the summarizer.
- Default prompts in AI nodes are generic. Add your brand voice early or you’ll be editing outputs forever.
Common Questions
How long does setup take?
About 30 minutes if your Notion database is ready.
Can I set this up without coding skills?
Yes. You’ll mostly be connecting accounts and mapping a few Notion properties.
Is this free to run?
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in OpenAI API costs (often just a few cents per day for typical RSS volumes).
Should I use n8n Cloud or self-host?
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
Can I customize the sources or the summaries?
You can swap the RSS sources by adding another “RSS Read” node and reusing the same hash, Notion duplicate check, fetch, and summarize path. If the new site has a different HTML structure, update the extraction selectors in the “Extract HTML Body” node for that branch. Common tweaks include changing the summary style (more bullet-like, more opinion-free), adding tags in Notion based on keywords, or storing only the summary when you don’t need full text.
Why is my Notion node failing?
Usually it’s the database not being shared with your Notion integration, or an expired token. Re-check the integration permissions in Notion, then confirm the correct workspace and database are selected inside n8n.
How many articles can this handle?
On self-hosted n8n, there’s no fixed execution cap (it depends on your server). On n8n Cloud, capacity depends on your plan, but this workflow is typically fine for daily RSS runs because it only summarizes new items after the Notion duplicate check. In practice, processing a day’s worth of stories usually finishes in minutes, not hours. If you add many more feeds, expect to tune batching and watch OpenAI rate limits.
Is n8n better than Zapier or Make for this?
Often, yes. This workflow relies on branching logic, HTML extraction, JavaScript cleanup, and a “check then process” pattern to prevent duplicates, and that tends to get clunky (and pricey) in tools that charge per task. n8n also lets you self-host, which is a big deal once you start running daily research automations across multiple feeds. Zapier or Make can be totally fine for a simpler “RSS to Notion” link-saver, but you’ll miss the full-text extraction and consistent summarization unless you add more moving parts. If you want help deciding, talk to an automation expert and we’ll map it to your setup.
Once this is running, your “news intake” stops being a bunch of tabs and becomes a searchable library. The workflow handles the repetitive stuff, so you can focus on decisions and ideas.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.