LinkedIn + Google Gemini: company stories, consistent
Copying details from LinkedIn into a doc, then rewriting them into something “publishable,” sounds simple. In reality, it turns into tab-switching, half-finished drafts, and summaries that somehow never match the last one you wrote.
This hits marketing managers who need quick company narratives, but sales teams doing account research feel it too. If you want LinkedIn story automation that produces the same structured output every time, this workflow is built for that.
You’ll see how it pulls a LinkedIn company page through Bright Data, turns the page into a clean set of facts, and has Google Gemini write a ready-to-edit story you can route to Slack, Sheets, or anywhere with a webhook.
How This Automation Works
The full n8n workflow, from trigger to final output:
n8n Workflow Template: LinkedIn + Google Gemini: company stories, consistent
flowchart LR
subgraph sg0["When clicking ‘Test workflow’ Flow"]
direction LR
n0@{ icon: "mdi:play-circle", form: "rounded", label: "When clicking ‘Test workflow’", pos: "b", h: 48 }
n1@{ icon: "mdi:brain", form: "rounded", label: "Google Gemini Chat Model", pos: "b", h: 48 }
n2@{ icon: "mdi:robot", form: "rounded", label: "Default Data Loader", pos: "b", h: 48 }
n3@{ icon: "mdi:robot", form: "rounded", label: "Recursive Character Text Spl..", pos: "b", h: 48 }
n4@{ icon: "mdi:swap-horizontal", form: "rounded", label: "If", pos: "b", h: 48 }
n5@{ icon: "mdi:swap-vertical", form: "rounded", label: "Set Snapshot Id", pos: "b", h: 48 }
n6["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Download Snapshot"]
n7@{ icon: "mdi:swap-vertical", form: "rounded", label: "Set LinkedIn URL", pos: "b", h: 48 }
n8@{ icon: "mdi:brain", form: "rounded", label: "Google Gemini Chat Model1", pos: "b", h: 48 }
n9@{ icon: "mdi:swap-horizontal", form: "rounded", label: "Check on the errors", pos: "b", h: 48 }
n10["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Perform LinkedIn Web Request"]
n11["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Check Snapshot Status"]
n12@{ icon: "mdi:robot", form: "rounded", label: "LinkedIn Data Extractor", pos: "b", h: 48 }
n13@{ icon: "mdi:robot", form: "rounded", label: "Concise Summary Generator", pos: "b", h: 48 }
n14["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Webhook Notifier for Data Ex.."]
n15["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Webhook Notifier for Summary.."]
n16@{ icon: "mdi:cog", form: "rounded", label: "Wait for 30 seconds", pos: "b", h: 48 }
n4 --> n9
n4 --> n16
n5 --> n11
n7 --> n10
n6 --> n12
n9 --> n6
n2 -.-> n13
n16 --> n11
n11 --> n4
n12 --> n13
n12 --> n14
n1 -.-> n13
n13 --> n15
n8 -.-> n12
n10 --> n5
n3 -.-> n2
n0 --> n7
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n0 trigger
class n2,n3,n12,n13 ai
class n1,n8 aiModel
class n4,n9 decision
class n6,n10,n11,n14,n15 api
classDef customIcon fill:none,stroke:none
class n6,n10,n11,n14,n15 customIcon
The Problem: LinkedIn research is slow and inconsistent
If you’ve ever tried to write “quick” company blurbs from LinkedIn, you know the trap. You open a company page, scan the About section, try to pull out what matters, then rewrite it in your own words so it doesn’t read like a paste. Next company, you do it again, but the structure changes, the tone shifts, and now the output is impossible to compare across accounts. Worse, a small miss (wrong HQ, outdated positioning, confusing parent/child brands) quietly spreads into proposals, campaigns, and CRM notes.
It adds up fast. Here’s where it breaks down once volume shows up.
- You spend about 20 minutes per company just locating, scanning, and copying the right details.
- Two people summarize the same company and you end up with two different “truths,” which means extra review cycles.
- Manual notes invite small errors, and those errors tend to survive all the way to client-facing assets.
- When leadership asks for “20 target accounts by tomorrow,” you either rush it or you don’t sleep.
The Solution: Turn LinkedIn pages into reusable company stories
This n8n workflow takes a LinkedIn company URL (or company name you map to a URL), scrapes the page through Bright Data’s Web Unlocker, extracts the key company facts from the returned HTML, and then asks Google Gemini to turn those facts into a polished narrative. The “polished” part matters, because it’s not just a raw summary. You can shape it into a consistent format your team recognizes, so every company story comes back with the same sections and voice. Finally, the workflow sends results out via webhook notifications, which makes it easy to push into Slack, Google Sheets, or a downstream system you already use.
The workflow starts with a manual trigger and a LinkedIn URL. It then runs a snapshot-style retrieval loop (with a short wait) until the scraped content is ready, extracts structured company info, generates the story with Gemini, and sends both the extracted data and the final narrative to your chosen endpoints.
What You Get: Automation vs. Results
| What This Workflow Automates | Results You’ll Get |
|---|---|
| Scraping the LinkedIn company page through Bright Data’s Web Unlocker | No more tab-switching or manual copy-paste (roughly 20 minutes saved per company) |
| Extracting structured company facts from the returned HTML | The same fields captured for every account, so stories stay comparable |
| Generating a polished narrative with Google Gemini | A consistent, ready-to-edit story in your team’s format and voice |
| Delivering results via webhook notifications | Output lands in Slack, Sheets, or your CRM without extra handoffs |
Example: What This Looks Like
Say your team needs stories for 15 target accounts each week. Manually, budget about 20 minutes per company to read LinkedIn, pull details, and rewrite a usable narrative, which comes out to roughly 5 hours. With this workflow, you trigger each run in under a minute, then wait for scraping and generation in the background (often a few minutes per company). You still review the final story, but the “blank page” work is gone, and you typically get most of that afternoon back.
What You’ll Need
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- Bright Data to scrape LinkedIn via Web Unlocker
- Google Gemini API to generate the company story
- Bright Data Web Unlocker token (get it from Bright Data zone settings)
Skill level: Intermediate. You’ll connect credentials, paste tokens, and adjust one or two nodes like the LinkedIn URL and webhook destination.
Don’t want to set this up yourself? Talk to an automation expert (free 15-minute consultation).
How It Works
You trigger it with a LinkedIn company URL. In the workflow, a manual launch starts the run and a “Set LinkedIn URL” style node defines which company page you want to process.
Bright Data retrieves the page and the workflow waits until it’s ready. The workflow sends a request to Bright Data, assigns a snapshot identifier, then checks status in a short loop. If the snapshot isn’t ready, it waits about 30 seconds and checks again.
The HTML gets converted into structured company facts. Once the snapshot file is retrieved, an information extractor pulls out the details you actually care about (the parts you usually hunt for by eye).
Google Gemini writes the narrative and the workflow sends it where you want. A summarization chain produces the story, then webhook notifications deliver both the extracted data and the final summary to your endpoint (Slack, Sheets, CRM, or a custom app).
You can easily swap the manual trigger for a Jotform submission (or another input method) if that fits your process better. See the full implementation guide below for customization options.
Step-by-Step Implementation Guide
Step 1: Configure the Manual Trigger
This workflow starts manually so you can test the end-to-end story generation on demand.
- Add the Manual Launch Trigger node as the workflow trigger.
- Connect Manual Launch Trigger to Assign LinkedIn Address.
Step 2: Connect Bright Data and Set the LinkedIn Target
This step launches the Bright Data snapshot creation for a specific LinkedIn company page and tracks its snapshot ID.
- In Assign LinkedIn Address, set url to `https://il.linkedin.com/company/bright-data` (or your target company URL).
- In Run LinkedIn Web Request, set URL to `https://api.brightdata.com/datasets/v3/trigger`.
- In Run LinkedIn Web Request, set JSON Body to `=[{"url": "{{ $json.url }}"}]`.
- In Run LinkedIn Web Request, set query parameters: dataset_id to `gd_l1vikfnt1wgvvqz95w` and include_errors to `true`.
- Credential Required: Connect your httpHeaderAuth credentials in Run LinkedIn Web Request.
- In Assign Snapshot Identifier, map snapshot_id to `{{ $json.snapshot_id }}`.
⚠️ Common Pitfall: If the Bright Data API credentials are missing or invalid, the snapshot will never start and subsequent status checks will loop indefinitely.
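If it helps to see Step 2 outside n8n, here is a minimal Python sketch of the same trigger call. The endpoint, dataset_id, and JSON body mirror the node settings above; the function names and the `BRIGHTDATA_TOKEN` environment variable are assumptions for this standalone example, not part of the workflow.

```python
import json
import os
import urllib.request

# Sketch of the snapshot-trigger call made by "Run LinkedIn Web Request".
DATASET_ID = "gd_l1vikfnt1wgvvqz95w"

def build_trigger_request(linkedin_url: str, token: str) -> dict:
    """Assemble the URL, headers, and body of the trigger request."""
    query = f"dataset_id={DATASET_ID}&include_errors=true"
    return {
        "url": f"https://api.brightdata.com/datasets/v3/trigger?{query}",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        # Same shape as the node's JSON Body: a one-element list of URLs.
        "body": json.dumps([{"url": linkedin_url}]).encode("utf-8"),
    }

def trigger_snapshot(linkedin_url: str) -> str:
    """POST the request and return the snapshot_id Bright Data assigns."""
    parts = build_trigger_request(linkedin_url, os.environ["BRIGHTDATA_TOKEN"])
    req = urllib.request.Request(parts["url"], data=parts["body"],
                                 headers=parts["headers"], method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["snapshot_id"]
```

The returned `snapshot_id` is what the Assign Snapshot Identifier node carries forward into the polling loop.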
Step 3: Configure Snapshot Status Polling and Retrieval
This section loops until Bright Data reports the snapshot is ready, then downloads the data.
- In Verify Snapshot Status, set URL to `=https://api.brightdata.com/datasets/v3/progress/{{ $json.snapshot_id }}`.
- Credential Required: Connect your httpHeaderAuth credentials in Verify Snapshot Status.
- In Status Ready Check, set the condition to check that `{{ $('Verify Snapshot Status').item.json.status }}` equals `ready`.
- In Delay 30 Seconds, set amount to `30` to throttle the polling loop.
- In Validate Error Count, set the condition to check that `{{ $json.errors.toString() }}` equals `0`.
- In Retrieve Snapshot File, set URL to `=https://api.brightdata.com/datasets/v3/snapshot/{{ $json.snapshot_id }}` and add query parameter format = `json`.
- Credential Required: Connect your httpHeaderAuth credentials in Retrieve Snapshot File.
Flow Note: Verify Snapshot Status → Status Ready Check loops via Delay 30 Seconds until ready, then Status Ready Check → Validate Error Count → Retrieve Snapshot File.
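The polling loop in Step 3 can be sketched as a short Python function. The progress and snapshot endpoints mirror the node settings; the function names are illustrative, and the `fetch` parameter is injectable purely so the loop logic can be exercised without a live Bright Data account. (The real workflow also runs Validate Error Count before retrieval, omitted here for brevity.)

```python
import json
import time
import urllib.request

def _http_get_json(url: str, token: str) -> dict:
    """Default fetcher: GET a Bright Data endpoint with header auth."""
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def poll_until_ready(snapshot_id: str, token: str, interval: int = 30,
                     fetch=_http_get_json) -> dict:
    """Poll snapshot progress until 'ready', then download it as JSON."""
    progress = f"https://api.brightdata.com/datasets/v3/progress/{snapshot_id}"
    while fetch(progress, token).get("status") != "ready":  # Status Ready Check
        time.sleep(interval)                                # Delay 30 Seconds
    snapshot = (f"https://api.brightdata.com/datasets/v3/snapshot/"
                f"{snapshot_id}?format=json")
    return fetch(snapshot, token)                           # Retrieve Snapshot File
```

Note there is no retry cap here, which matches the workflow's behavior: if the snapshot never becomes ready (for example, with bad credentials), the loop runs forever, hence the pitfall called out in Step 2.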
Step 4: Set Up AI Extraction and Summarization
These nodes turn the retrieved data into a structured company story and a concise summary using Gemini models.
- In LinkedIn Info Extractor, set Text to `=Write a complete story of the provided company information in JSON... Here's the Company Info in JSON - {{ $json.input }}`.
- In LinkedIn Info Extractor, keep the required attribute company_story with description `Detailed Company Info`.
- Ensure Gemini Chat Engine B is connected as the language model for LinkedIn Info Extractor.
- Credential Required: Connect your googlePalmApi credentials in Gemini Chat Engine B.
- In Brief Summary Builder, keep Operation Mode set to `documentLoader`.
- In Brief Summary Builder, set the prompt to `=Write a concise summary of the following: {{ $json.output.company_story }}` and the combine prompt to `=Write a concise summary of the following: CONCISE SUMMARY: {{ $json.output.company_story }}`.
- Ensure Gemini Chat Engine is connected as the language model for Brief Summary Builder.
- Credential Required: Connect your googlePalmApi credentials in Gemini Chat Engine.
- Keep Recursive Text Splitter set to chunkOverlap `100` and ensure it feeds Standard Data Loader, which connects to Brief Summary Builder via ai_document.
Flow Note: LinkedIn Info Extractor outputs to both Brief Summary Builder and Notify Extractor Webhook in parallel.
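To iterate on wording outside n8n, the two Step 4 prompts can be expressed as plain functions. The prompt text matches the node settings above; the function names are illustrative, and the commented-out Gemini call assumes the `google-generativeai` package and a model name the workflow itself does not pin down.

```python
def extraction_prompt(company_json: str) -> str:
    """Prompt used by LinkedIn Info Extractor."""
    return ("Write a complete story of the provided company information in "
            f"JSON... Here's the Company Info in JSON - {company_json}")

def summary_prompt(company_story: str) -> str:
    """Prompt used by Brief Summary Builder."""
    return f"Write a concise summary of the following: {company_story}"

# Illustrative call (pip install google-generativeai, then set your API key):
# import google.generativeai as genai
# genai.configure(api_key="YOUR_GEMINI_KEY")
# model = genai.GenerativeModel("gemini-1.5-flash")  # model name is an assumption
# story = model.generate_content(extraction_prompt(raw_json)).text
```

Tweaking these strings (for example, adding “What they do / Who they serve / Why it matters” sections) is how you lock in a consistent house format across every company story.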
Step 5: Configure Webhook Outputs
Send the extracted story and summary to your external systems or testing endpoints.
- In Notify Extractor Webhook, set URL to `https://webhook.site/[YOUR_ID]` and map response to `{{ $json.output }}`.
- In Notify Summary Webhook, set URL to `https://webhook.site/[YOUR_ID]` and map response to `{{ $json.response.text }}`.
Tip: Replace webhook.site URLs with your production endpoints once testing is complete.
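For downstream integrations, it helps to know what those two nodes actually deliver: a JSON object with a single `response` field carrying either the extracted data or the summary text. A minimal sketch (function names are illustrative; the payload shape mirrors the node mappings above):

```python
import json
import urllib.request

def build_payload(value) -> bytes:
    """Both notifier nodes send one `response` field."""
    return json.dumps({"response": value}).encode("utf-8")

def notify(url: str, value) -> int:
    """POST the payload to a webhook endpoint; returns the HTTP status."""
    req = urllib.request.Request(
        url, data=build_payload(value),
        headers={"Content-Type": "application/json"}, method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Whatever you point the webhooks at (Slack, a Sheets middleware, a custom app) should expect this shape.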
Step 6: Test and Activate Your Workflow
Run a manual test to validate the Bright Data loop, AI extraction, and webhook outputs.
- Click Execute Workflow and trigger Manual Launch Trigger.
- Confirm Run LinkedIn Web Request returns a snapshot ID and Verify Snapshot Status eventually reports `ready`.
- Verify Retrieve Snapshot File returns JSON and that LinkedIn Info Extractor produces a `company_story`.
- Check that Notify Extractor Webhook and Notify Summary Webhook receive payloads.
- Once satisfied, toggle the workflow to Active for production use.
Common Gotchas
- Bright Data credentials can expire or need specific permissions. If things break, check your Web Unlocker zone token and header auth credential in n8n first.
- If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
- Default prompts in AI nodes are generic. Add your brand voice early or you’ll be editing outputs forever.
Frequently Asked Questions
How long does this take to set up?
About 30 minutes once you have your Bright Data token and Gemini key.
Do I need to know how to code?
No. You’ll mostly paste credentials and tweak the LinkedIn URL and webhook destination.
Is n8n free to use?
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in Bright Data usage and Gemini API costs.
Where should I host n8n?
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
Can I customize the story format or send results to Google Sheets?
Yes, and it’s honestly the best part. You can adjust the prompt in the Gemini Chat Engine or the Brief Summary Builder to force sections like “What they do,” “Who they serve,” and “Why it matters,” or switch the tone to founder-friendly, formal, or bullet-based. To send results to Sheets, add or swap in a Google Sheets node after the summary is generated, using the same fields the webhook already sends. Many teams also expand the LinkedIn Info Extractor to capture extra facts like headquarters or employee count so the story has more substance. Keep the structure consistent first, then iterate on voice.
Why is the Bright Data scraping step failing?
Usually it’s an invalid or expired Web Unlocker token in your header auth credential. It can also be a zone configuration issue in Bright Data, or your request being blocked because the URL isn’t a proper LinkedIn company page. If the workflow is stuck looping, check the snapshot status responses first to confirm Bright Data is actually generating the file.
How many companies can I process per month?
On the n8n Cloud Starter plan, you can process up to a few thousand executions per month, and higher tiers handle more. If you self-host, there’s no execution cap (it depends on your server). In practice, this workflow runs one company per execution and the wait loop means throughput depends mostly on scraping and API response time, not n8n itself.
Is n8n a better fit than Zapier or Make for this?
Often, yes, because this flow benefits from branching, waiting, and multi-step extraction that gets awkward (and pricey) in simpler tools. n8n is also easier to self-host when you want unlimited runs, and it’s more flexible when you start sending the same output to multiple destinations. Zapier or Make can still be fine if you only need “URL in, summary out” with no readiness checks. If you’re unsure, Talk to an automation expert and you’ll get a straight answer based on volume and complexity.
Once this is running, “write a company story” stops being a task and becomes a button you press. The workflow handles the repetitive parts so you can focus on what to say next.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.