LinkedIn to Google Gemini, clean resumes ready to use
Copying a LinkedIn profile into a document sounds quick. Then you hit broken formatting, missing dates, weird line breaks, and “skills” sprinkled across three different sections.
Recruiters feel it when they’re trying to screen faster. HR managers feel it when they need consistency across candidates. And if you run a small talent team, LinkedIn resume automation is the difference between a clean pipeline and a messy one.
This workflow takes one LinkedIn URL, pulls the profile reliably, and turns it into a standards-friendly JSON Resume you can actually use. You’ll see how it works, what you need, and what you can customize.
How This Automation Works
The full n8n workflow, from trigger to final output:
n8n Workflow Template: LinkedIn to Google Gemini, clean resumes ready to use
```mermaid
flowchart LR
    n0["Manual Launch Trigger"] --> n1["Assign Source Settings"]
    n1 --> n2["External Data Request (Bright Data)"]
    n2 --> n3["Convert Markdown to Text"]
    n4["Gemini Markdown Model"] -.-> n3
    n3 --> n5["Skill Data Miner"]
    n6["Gemini Skill Model"] -.-> n5
    n3 --> n12["JSON Resume Builder"]
    n13["Gemini Resume Model"] -.-> n12
    n12 --> n14["Transform Output Script"]
    n14 --> n9["Send Webhook Notice"]
    n14 --> n7["Build Binary Payload"]
    n7 --> n8["Save Structured File"]
    n5 --> n11["Build Skill Binary"]
    n11 --> n10["Save Skills File"]
```
The Problem: LinkedIn Profiles Aren’t Resume-Ready
LinkedIn is great for humans and frustrating for systems. The information you need is there, but it’s spread across “About,” experience entries, featured links, and skills that don’t map neatly to your ATS fields. So you end up doing the same dance: copy, paste, clean, reformat, and still miss something important. Multiply that by a hiring sprint, and you’re burning hours on admin work instead of candidate evaluation. Worse, manual cleanup creates inconsistency, which quietly breaks downstream reporting and scoring.
It adds up fast. Here’s where the friction usually shows up.
- Profiles copy over as messy blocks of text, so you waste time reformatting before anyone can review them.
- Anti-bot and CAPTCHA issues make “quick scraping” unreliable, which means the process collapses right when volume spikes.
- Skills and job history end up interpreted differently by each person, so “consistent screening” becomes a nice idea, not reality.
- You can’t easily pass a cleaned profile into other tools because it’s not structured data, it’s a patchwork document.
The Solution: LinkedIn URL → JSON Resume (Automatically)
This n8n workflow turns a LinkedIn profile URL into a clean, structured JSON Resume using Bright Data for the fetch and Google Gemini for the extraction. You start by supplying the LinkedIn URL plus a couple of required settings (like your Bright Data zone and where you want the output sent). Bright Data’s Web Unlocker pulls the profile content without the usual scraping headaches. Then Gemini converts the raw page content into clean text, extracts skills, and builds a JSON Resume that follows a consistent schema. Finally, the workflow sends a webhook notification and also saves the resume (and skills) as files so you can reuse them anywhere.
The workflow kicks off from a manual trigger, which is perfect for testing or running on-demand for specific candidates. From there it gathers the profile data, uses Gemini to transform it into structured fields, and outputs a ready-to-store JSON resume plus a separate skills artifact. No copy-paste. No “did we capture the dates?” anxiety.
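To make "structured fields" concrete, here is a sketch of the shape the JSON Resume output takes. The field names follow the public JSON Resume standard (jsonresume.org); the sample values are illustrative, and the exact fields you get depend on the inputSchema you keep in the JSON Resume Builder node.

```javascript
// Illustrative shape of the workflow's output, following the public
// JSON Resume standard. Values here are placeholders, not real data.
const sampleResume = {
  basics: {
    name: "Jane Doe",
    label: "Senior Data Engineer",
    summary: "Extracted from the LinkedIn 'About' section.",
    location: { city: "Berlin", countryCode: "DE" },
  },
  work: [
    {
      name: "Acme Corp",
      position: "Data Engineer",
      startDate: "2020-03",
      endDate: "2023-08",
      summary: "Pulled from the experience entries.",
    },
  ],
  skills: [{ name: "Python" }, { name: "SQL" }],
};

// Downstream systems can rely on a stable set of top-level keys.
console.log(Object.keys(sampleResume).join(","));
```

Because every candidate comes out with the same keys, screening, scoring, and ATS imports all read the same structure.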
What You Get: Automation vs. Results
| What This Workflow Automates | Results You’ll Get |
|---|---|
| Fetching LinkedIn profiles through Bright Data Web Unlocker, past anti-bot and CAPTCHA blocks | Reliable profile capture even when volume spikes |
| Converting raw profile markdown into clean text with Gemini | No more copy, paste, clean, reformat cycles |
| Extracting skills and building a schema-consistent JSON Resume | Structured data every reviewer interprets the same way |
| Writing resume and skills files and firing a webhook notification | Output other systems (ATS, CRM, reporting) can ingest immediately |
Example: What This Looks Like
Say you screen 20 candidates from LinkedIn in a day. Manually, if you spend about 15 minutes per profile to copy, clean, and normalize sections, that’s roughly 5 hours of pure formatting work. With this workflow, you drop in a LinkedIn URL, wait for Bright Data + Gemini to produce the JSON Resume, and you’re done; call it about 5–10 minutes per candidate including processing. That’s several hours back on any day you’re doing real volume.
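The back-of-envelope math from that example, if you want to plug in your own numbers (the 7.5-minute figure is just the midpoint of the 5–10 minute range above):

```javascript
// Rough time-savings estimate for a day of screening.
const candidates = 20;
const manualMinutesEach = 15;      // copy, clean, normalize by hand
const automatedMinutesEach = 7.5;  // midpoint of the 5–10 min range

const manualHours = (candidates * manualMinutesEach) / 60;
const automatedHours = (candidates * automatedMinutesEach) / 60;

console.log(`Saved ~${manualHours - automatedHours} hours per day`);
```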
What You’ll Need
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- Bright Data Web Unlocker for CAPTCHA-free LinkedIn scraping
- Google Gemini API to extract structured resume fields
- Bright Data Web Unlocker token (get it from your Bright Data zone settings)
Skill level: Intermediate. You will connect credentials, paste API keys, and edit a few “Set” fields like the profile URL and webhook destination.
Don’t want to set this up yourself? Talk to an automation expert (free 15-minute consultation).
How It Works
You provide the LinkedIn URL and settings. A manual trigger starts the run, then a setup step assigns the target profile URL, your Bright Data zone, and the webhook destination for the finished resume.
The profile is fetched through Bright Data. An HTTP request pulls the profile content via Web Unlocker, which is built to handle bot protections that break typical scrapers.
Gemini cleans and extracts the information. The workflow converts the pulled content into usable text, then Gemini-based extractors produce two structured outputs: a skill set and a JSON Resume that maps cleanly to consistent fields.
Results are saved and sent onward. n8n builds a file payload, writes the JSON resume and skills to disk, and sends a webhook notification so other systems can ingest it right away.
You can easily modify the output destination to push into an ATS or CRM instead of writing files locally. See the full implementation guide below for customization options.
Step-by-Step Implementation Guide
Step 1: Configure the Manual Trigger
This workflow starts manually, so you can run the resume generation on demand.
- Add and keep Manual Launch Trigger as the first node in the canvas.
- Connect Manual Launch Trigger to Assign Source Settings to begin the data flow.
Step 2: Connect External Data Request
Configure source parameters and fetch the profile markdown from the external API.
- In Assign Source Settings, set url to `https://www.linkedin.com/in/[YOUR_ID]`, zone to `web_unlocker1`, and webhook_notification_url to `https://webhook.site/[YOUR_ID]`.
- Open External Data Request and set URL to `https://api.brightdata.com/request` and Method to `POST`.
- Enable Send Body and set body parameters: zone to `{{ $json.zone }}`, url to `{{ $json.url }}`, format to `raw`, and data_format to `markdown`.
- Credential Required: Connect your httpHeaderAuth credentials in External Data Request.
Convert Markdown to Text uses the markdown returned from the request and prepares it for AI extraction.
Step 3: Set Up the Markdown-to-Text AI Processing
Use the Gemini model to clean and normalize the markdown into plain text.
- In Convert Markdown to Text, set Text to `=You need to analyze the below markdown and convert to textual data. Please do not output with your own thoughts. Make sure to output with textual data only with no links, scripts, css etc. {{ $json.data }}`.
- Set the message to `You are a markdown expert` in the messages section.
- Open Gemini Markdown Model and keep Model Name as `models/gemini-2.0-flash-exp`.
- Credential Required: Connect your googlePalmApi credentials in Gemini Markdown Model. This model is connected as the language model for Convert Markdown to Text.
Convert Markdown to Text outputs to both Skill Data Miner and JSON Resume Builder in parallel.
Step 4: Extract Skills and Build JSON Resume (Parallel Branches)
Two extraction paths run simultaneously: one for skill mining and one for a full JSON resume.
- In Skill Data Miner, set Text to `=Perform Data Mining and extract the skills from the provided resume {{ $json.text }}` and keep the provided inputSchema JSON for skills.
- Open Gemini Skill Model and keep Model Name as `models/gemini-2.0-flash-exp`.
- Credential Required: Connect your googlePalmApi credentials in Gemini Skill Model. This model is connected as the language model for Skill Data Miner.
- In JSON Resume Builder, set Text to `=Extract the resume in JSON format. {{ $json.text }}` and keep the provided inputSchema JSON for the resume.
- Open Gemini Resume Model and keep Model Name as `models/gemini-2.0-flash-exp`.
- Credential Required: Connect your googlePalmApi credentials in Gemini Resume Model. This model is connected as the language model for JSON Resume Builder.
JSON Resume Builder feeds into Transform Output Script, while Skill Data Miner continues to the skills file branch.
Step 5: Build Output Files and Webhook Notice
Format the outputs, write files, and send a webhook notice once the resume JSON is ready.
- In Transform Output Script, set JavaScript Code to `return $input.first().json.output`.
- Transform Output Script outputs to both Send Webhook Notice and Build Binary Payload in parallel.
- In Send Webhook Notice, set URL to `{{ $('Assign Source Settings').item.json.webhook_notification_url }}` and body field json_resume to `{{ $('JSON Resume Builder').item.json.output.toJsonString() }}`.
- In Build Binary Payload, keep the function code that base64-encodes the JSON for file writing.
- In Save Structured File, set File Name to `=d:\Json_Resume.json` and Operation to `write`.
- In Build Skill Binary, keep the function code that base64-encodes the skills JSON.
- In Save Skills File, set File Name to `=d:\Resume_Skills.json` and Operation to `write`.
⚠️ Common Pitfall: Ensure the file paths used by Save Structured File and Save Skills File are valid on your n8n host machine, otherwise the write operation will fail.
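For orientation, the two "binary" Code nodes follow n8n's convention of returning a base64-encoded `binary` property that a file-write node can consume. A minimal sketch of what Build Binary Payload does (the `fileName` and `mimeType` values here are illustrative assumptions, not the template's exact code):

```javascript
// Minimal sketch of an n8n Code node that base64-encodes a JSON object
// so a downstream write-file node can save it to disk.
function toBinaryItem(resumeObject, fileName) {
  const base64 = Buffer.from(
    JSON.stringify(resumeObject, null, 2)
  ).toString("base64");
  return {
    json: {},
    binary: {
      data: {
        data: base64,                  // base64-encoded file contents
        mimeType: "application/json",  // assumed; adjust as needed
        fileName,                      // e.g. "Json_Resume.json"
      },
    },
  };
}

const item = toBinaryItem({ basics: { name: "Jane Doe" } }, "Json_Resume.json");
console.log(item.binary.data.fileName);
```

Build Skill Binary does the same thing for the skills JSON, just with a different payload and file name.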
Step 6: Test and Activate Your Workflow
Run a full test to confirm the resume and skills files are created and the webhook is notified.
- Click Execute Workflow and verify External Data Request returns markdown data.
- Confirm Convert Markdown to Text produces clean text, and both Skill Data Miner and JSON Resume Builder run successfully in parallel.
- Check the filesystem for `d:\Json_Resume.json` and `d:\Resume_Skills.json` created by Save Structured File and Save Skills File.
- Verify the webhook target receives the json_resume payload from Send Webhook Notice.
- When results are correct, switch the workflow to Active to use it in production.
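A quick sanity check you can run on a written file's contents before wiring it into production systems. The required-key list is an assumption based on the schema discussed above; adjust it to whatever your inputSchema guarantees.

```javascript
// Validate resume JSON before feeding it downstream. In practice you'd
// pass in the contents of the saved Json_Resume.json file.
function validateResumeJson(text, requiredKeys = ["basics", "work", "skills"]) {
  const parsed = JSON.parse(text); // throws on invalid JSON
  const missing = requiredKeys.filter((key) => !(key in parsed));
  if (missing.length > 0) {
    throw new Error(`Resume JSON missing keys: ${missing.join(", ")}`);
  }
  return parsed;
}

const ok = validateResumeJson(
  JSON.stringify({ basics: {}, work: [], skills: [] })
);
console.log(Object.keys(ok).length);
```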
Common Gotchas
- Bright Data credentials can expire or need specific permissions. If things break, check your Web Unlocker token and zone access in Bright Data first.
- If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
- Default prompts in AI nodes are generic. Add your brand voice early or you’ll be editing outputs forever.
Frequently Asked Questions
How long does setup take?
About 30 minutes if you already have your Bright Data and Gemini keys.
Do I need coding skills?
No. You’ll mostly paste credentials and edit a few fields like the LinkedIn URL and webhook destination.
Is this free to run?
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in Bright Data and Google Gemini API usage costs.
Where should I host n8n?
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
Can I adapt this for non-English profiles or a different LLM?
Yes, and it’s a practical upgrade. Add a translation LLM step right after the “Convert Markdown to Text” stage, then feed the translated text into the skill extractor and the JSON Resume builder. You can also adjust the resume schema mapping if you want localized field names, but most teams keep the JSON keys stable and translate only the content. If you prefer OpenAI, you can swap the Gemini model used by the extraction steps for an OpenAI Chat Model node.
What if the Bright Data request fails?
Usually it’s an expired token or the wrong Bright Data zone. Regenerate the Web Unlocker token in Bright Data, then update the Header Authentication credential in n8n. If it still fails, check whether your plan allows Web Unlocker requests and whether LinkedIn is returning a blocked response for that specific URL.
How many profiles can I process per day?
It depends on your n8n plan and how fast Bright Data and Gemini respond, but most teams comfortably process dozens of profiles per day with this setup.
Is n8n worth it over Zapier or Make here?
Often, yes, because this is not a simple “A to B” zap. You’re doing scraping, cleanup, LLM extraction, file creation, and webhook delivery, which is where n8n’s flexibility really helps. Self-hosting is also a big deal if you plan to run high volume without counting every task. Zapier or Make can still work if you keep it small and you don’t need advanced branching or file handling. Frankly, the bigger question is governance: where do you want the data to live, and who needs access? Talk to an automation expert if you want help deciding.
Once this is running, you stop “rewriting LinkedIn into a resume” and start collecting clean candidate data on demand. Set it up once, and let the workflow do the boring part.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.