Fathom to Google Docs, repurpose transcripts fast
Your best insights are trapped in meeting recordings. You know there’s good stuff in the transcript, but turning it into a post, an image idea, and a short video prompt usually turns into a “someday” task.
This Fathom Google Docs automation hits hardest for consultants who run sessions all week. Marketing leads feel it too, and educators repurposing workshops run into the same wall. You want usable content quickly, not another admin project.
This workflow pulls your latest Fathom transcript, drafts a ready-to-edit Google Doc, generates image and video prompts, and pings Slack when the video is ready. Below, you’ll see exactly what it does and what you’ll need to run it reliably.
How This Automation Works
The full n8n workflow, from trigger to final output:
n8n Workflow Template: Fathom to Google Docs, repurpose transcripts fast
flowchart LR
subgraph sg0["When chat message received Flow"]
direction LR
n0@{ icon: "mdi:play-circle", form: "rounded", label: "When chat message received", pos: "b", h: 48 }
n1@{ icon: "mdi:brain", form: "rounded", label: "Main GPT-4 Model", pos: "b", h: 48 }
n2@{ icon: "mdi:memory", form: "rounded", label: "Simple Memory", pos: "b", h: 48 }
n3@{ icon: "mdi:wrench", form: "rounded", label: "Text to Video", pos: "b", h: 48 }
n4@{ icon: "mdi:wrench", form: "rounded", label: "Video Generator", pos: "b", h: 48 }
n5@{ icon: "mdi:brain", form: "rounded", label: "Video Generator GPT Model", pos: "b", h: 48 }
n6@{ icon: "mdi:robot", form: "rounded", label: "Content Orchestrator", pos: "b", h: 48 }
n7@{ icon: "mdi:wrench", form: "rounded", label: "Content Post Generator", pos: "b", h: 48 }
n8@{ icon: "mdi:wrench", form: "rounded", label: "Text to Image", pos: "b", h: 48 }
n9@{ icon: "mdi:web", form: "rounded", label: "Get Fathom Transcript", pos: "b", h: 48 }
n10@{ icon: "mdi:wrench", form: "rounded", label: "Image Generator", pos: "b", h: 48 }
n11@{ icon: "mdi:wrench", form: "rounded", label: "Transcript to Content", pos: "b", h: 48 }
n2 -.-> n6
n8 -.-> n10
n3 -.-> n4
n10 -.-> n7
n4 -.-> n7
n1 -.-> n6
n1 -.-> n7
n9 -.-> n6
n11 -.-> n7
n7 -.-> n6
n5 -.-> n4
n5 -.-> n10
n0 --> n6
end
subgraph sg1["Subworkflow Entry Point Flow"]
direction LR
n12@{ icon: "mdi:play-circle", form: "rounded", label: "Subworkflow Entry Point", pos: "b", h: 48 }
n13@{ icon: "mdi:cog", form: "rounded", label: "Create Google Doc", pos: "b", h: 48 }
n14@{ icon: "mdi:cog", form: "rounded", label: "Insert Content into Doc", pos: "b", h: 48 }
n15@{ icon: "mdi:swap-vertical", form: "rounded", label: "Format Image Link", pos: "b", h: 48 }
n16["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Call DALL-E API"]
n17@{ icon: "mdi:cog", form: "rounded", label: "Convert Image to Binary", pos: "b", h: 48 }
n18@{ icon: "mdi:cog", form: "rounded", label: "Upload to Storage", pos: "b", h: 48 }
n19["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Fetch Generated Video"]
n21["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Call Video API"]
n22@{ icon: "mdi:swap-vertical", form: "rounded", label: "Format Video Link", pos: "b", h: 48 }
n21 --> n19
n16 --> n17
n13 --> n14
n18 --> n15
n19 --> n22
n17 --> n18
n12 --> n13
n12 --> n16
n12 --> n21
end
subgraph sg2["Flow 3"]
direction LR
n20["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/slack.svg' width='40' height='40' /></div><br/>Video Ready"]
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n0,n12 trigger
class n6 ai
class n1,n5 aiModel
class n3,n4,n7,n8,n10,n11 ai
class n2 ai
class n9,n16,n19,n21 api
classDef customIcon fill:none,stroke:none
class n16,n19,n21,n20 customIcon
The Problem: Great transcripts, zero time to repurpose them
A Fathom transcript is useful, but it’s not “content” yet. First you skim for highlights. Then you try to shape it into a post that doesn’t sound like a meeting. Then come the extras: a video concept, an image idea, and somewhere organized to store it all. Most teams end up copying text into Google Docs, rewriting sections, hunting for quotable moments, and forgetting to share the final assets. The result is inconsistent publishing and a pile of unused recordings.
The friction compounds. Here’s where it usually breaks down.
- Finding the strongest 2-3 moments takes long enough that you push it to next week.
- Manual copy-paste into Google Docs leads to messy formatting and missing context.
- Video and image ideas get written in random places, so you can’t reuse them later.
- Someone has to remember to post the video link to Slack, and honestly that’s where things get dropped.
The Solution: Turn the latest Fathom transcript into a Doc, image, and video link
This workflow starts with a simple chat request in n8n, like “Create content from my latest session.” Once triggered, it fetches your most recent Fathom transcript (focused on the last 7 days), then uses OpenAI to analyze what was said and pull out the key insights and “breakthrough” moments. From that analysis, it drafts written content and automatically creates a Google Doc you can edit, reuse, and share. In parallel, it generates an image prompt, creates a social graphic via DALL·E, uploads it to Google Drive, and assembles a clean shareable link. Finally, it sends a video generation prompt to your video provider (such as Luma or Runway), checks for the finished result, then notifies Slack with the ready video URL.
The workflow begins in chat, then branches into three content tracks: Google Docs for the written draft, DALL·E for an image asset, and a video API for a short clip. When the video is complete, Slack gets the link so you can review and publish without chasing files.
What You Get: Automation vs. Results
| What This Workflow Automates | Results You’ll Get |
|---|---|
| Fetching your latest Fathom transcript and extracting key moments with AI | An edit-ready Google Doc draft for every recorded session |
| Generating an image prompt, creating a DALL·E graphic, and uploading it to Google Drive | A clean, shareable image link with no manual file wrangling |
| Sending a video prompt to your provider and retrieving the finished result | A Slack notification with the ready-to-publish video URL |
Example: What This Looks Like
Say you run 3 client calls a week and want one LinkedIn post, one graphic, and one short video per call. Manually, you might spend about 30 minutes reviewing the transcript, 45 minutes drafting, 20 minutes writing prompts, and another 15 minutes tracking links and sharing updates. That’s roughly 2 hours per call, so about 6 hours a week. With this workflow, you spend maybe 5 minutes sending the chat request and skimming the finished Google Doc, while the video generates in the background (often 2-5 minutes). Your Slack message arrives with the video link when it’s ready.
What You’ll Need
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- Fathom to access your recorded meeting transcripts.
- Google Docs to store editable content drafts.
- Slack to receive the finished video link.
- Google Drive to store and share generated images.
- OpenAI API key (get it from the OpenAI API dashboard).
- Video generation API access (from Luma or Runway provider settings).
Skill level: Intermediate. You’ll connect accounts, add API keys, and confirm the three subworkflows are set up once.
Don’t want to set this up yourself? Talk to an automation expert (free 15-minute consultation).
How It Works
A chat request kicks things off. You trigger the workflow through the n8n chat interface, asking it to create content from your latest session. That message is routed into an AI “coordinator” that decides which assets to generate.
The transcript gets fetched and understood. The workflow pulls your most recent Fathom transcript (within the last 7 days) and passes it to OpenAI to extract themes, standout moments, and usable quotes. This is the part you normally do while half-reading, half-rewriting.
Three content tracks run in parallel. One track writes a structured draft and creates a Google Doc, then injects the draft so you have something clean to edit. Another track generates an image prompt, requests the image (DALL·E), converts it into a file, and uploads it to Google Drive. The third track creates a video prompt, sends it to your video provider, then retrieves the finished video result.
Slack receives the final “ready” link. Once the video URL is assembled, the workflow posts a notification in Slack so your team can review, schedule, or publish without digging through logs or tabs.
You can easily modify which transcript window you pull (like last 24 hours instead of 7 days) and where assets get posted in Slack based on your needs. See the full implementation guide below for customization options.
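To make the transcript-window tweak concrete, here is a minimal sketch of the date-filter logic, for instance inside an n8n Code node. It assumes each meeting item carries an ISO `created_at` timestamp; adjust the field name to match the actual Fathom API response.

```javascript
// Filter meeting items to a recent window (7 days by default in this workflow).
// The `created_at` field name is an assumption -- check your Fathom response.
function filterRecentMeetings(meetings, windowDays, now = new Date()) {
  const cutoff = now.getTime() - windowDays * 24 * 60 * 60 * 1000;
  return meetings.filter((m) => new Date(m.created_at).getTime() >= cutoff);
}

// Example: pass 1 instead of 7 to switch to a 24-hour window.
const meetings = [
  { title: "Client call", created_at: "2024-05-10T09:00:00Z" },
  { title: "Old workshop", created_at: "2024-04-01T09:00:00Z" },
];
const recent = filterRecentMeetings(meetings, 7, new Date("2024-05-12T00:00:00Z"));
```

Changing the window is then a one-number edit rather than a rework of the fetch step.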
Step-by-Step Implementation Guide
Step 1: Configure the Chat Trigger
This workflow starts when a user sends a chat message into n8n, which is then routed to the AI coordinator.
- Add the Chat Input Trigger node and keep the default options.
- Connect Chat Input Trigger to Content Coordinator to pass user messages into the AI orchestration layer.
Step 2: Connect Fathom Transcript Retrieval
This step ensures the workflow can pull the latest coaching session transcript from Fathom.
- Open Fetch Fathom Transcript and confirm the URL is `https://api.fathom.ai/external/v1/meetings`.
- Enable Send Query and verify the query parameters include `include_summary=true` and `limit=1`.
- Ensure Fetch Fathom Transcript is connected as a tool to Content Coordinator.
Step 3: Set Up the AI Coordination Layer
The AI layer coordinates transcript processing and routes to specialized sub-workflows for writing, video, and image creation.
- In Primary GPT Model, select the Model value `gpt-4.1-mini`.
- Credential Required: Connect your openAiApi credentials in Primary GPT Model.
- Set Context Buffer Context Window Length to `2` and connect it to Content Coordinator as memory.
- Connect Primary GPT Model as the language model for both Content Coordinator and Post Creator Agent.
- In Visual GPT Model, select Model `gpt-4.1-mini` and connect it to Video Prompt Agent and Image Prompt Agent.
- Credential Required: Connect your openAiApi credentials in Visual GPT Model.
Step 4: Configure Subworkflow Outputs (Docs, Images, Video)
When the subworkflow starts, it runs three branches in parallel to generate docs, images, and video prompts.
- Ensure Subflow Start Trigger has inputs for `post_title` and `post_content`, as used later by Google Docs.
- Subflow Start Trigger outputs to Generate Google Doc, DALL·E Image Request, and Video Creation Request in parallel.
- In Generate Google Doc, set Title to `{{ $json.post_title }}` and Folder ID to `[YOUR_ID]`.
- In Inject Doc Content, set Operation to `update` and Document URL to `{{ $json.id }}`, with insert text `{{ $('Subflow Start Trigger').item.json.post_content }}`.
- In DALL·E Image Request, set URL to `https://api.openai.com/v1/images/generations`, Method to `POST`, and the prompt to `{{ $json.output }}`.
- In Video Creation Request, set URL to `https://api.kie.ai/api/v1/jobs/createTask`, pass `input.prompt` as `{{ $json.output }}`, and set `input.aspect_ratio` to `landscape`.
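To show what the image branch actually sends, here is a hedged sketch of the JSON body for the DALL·E endpoint. The model and size values are assumptions, so match them to whatever your node is configured with; `response_format: "b64_json"` is what lines up with the `data[0].b64_json` mapping used in Step 5.

```javascript
// Sketch of the body the "DALL-E Image Request" node POSTs to
// https://api.openai.com/v1/images/generations.
function buildImageRequest(prompt) {
  return {
    model: "dall-e-3",           // assumed model -- match your node config
    prompt,                      // resolved from {{ $json.output }} at runtime
    n: 1,
    size: "1024x1024",           // assumed size
    response_format: "b64_json", // needed so Step 5 can convert to binary
  };
}

const body = buildImageRequest("A clean social graphic about a coaching breakthrough");
```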
Step 5: Assemble Image and Video Links
These nodes convert generated assets into shareable links and prepare them for notifications.
- Connect DALL·E Image Request to Image to Binary File and set Operation to `toBinary` with Source Property `data[0].b64_json`.
- In Upload to Drive Storage, set Drive to `My Drive` and Folder ID to `[YOUR_ID]`.
- Credential Required: Connect your googleDriveOAuth2Api credentials in Upload to Drive Storage.
- In Assemble Image Link, map the URL with `{{ JSON.parse($json.data.resultJson).resultUrls[0] }}`.
- Connect Video Creation Request to Retrieve Video Result, then to Assemble Video Link using `{{ JSON.parse($json.data.resultJson).resultUrls[0] }}`.
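The two data-shaping moves in this step can be sketched in plain JavaScript: decoding the base64 image payload into a binary buffer (what the `toBinary` operation does internally) and pulling the first URL out of the provider's stringified `resultJson` field, mirroring the expression used in the Assemble nodes.

```javascript
// Decode the DALL-E base64 payload into binary, as "Image to Binary File" does.
function decodeImage(b64) {
  return Buffer.from(b64, "base64");
}

// Mirrors the expression {{ JSON.parse($json.data.resultJson).resultUrls[0] }}
function firstResultUrl(apiResponse) {
  return JSON.parse(apiResponse.data.resultJson).resultUrls[0];
}

const sampleResponse = {
  data: { resultJson: JSON.stringify({ resultUrls: ["https://example.com/video.mp4"] }) },
};
const url = firstResultUrl(sampleResponse);
```

If the provider's response shape ever changes, this expression is the first place the workflow will break, so it is worth testing with a real response early.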
Step 6: Configure the Notification Output
Send the final video link to Slack once the asset is ready.
- Open Notify Video Ready and set Text to `Your video is ready! 🎥 \n\n[Watch your coaching session video]({{ $json.videoURL }}) \n\nThis captures the key breakthrough moment from your session. The video shows the problem, the solution, and the impact - all in about 60 seconds.`
- Set Select to `channel` and choose the Channel value.
- Credential Required: Connect your slackOAuth2Api credentials in Notify Video Ready.
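One caveat worth knowing: Slack's mrkdwn format writes links as `<url|label>`, not Markdown-style `[label](url)`, so the bracketed link above may render literally. A sketch of the message built in Slack's own link syntax, with `videoURL` standing in for the resolved `{{ $json.videoURL }}` expression:

```javascript
// Build the "Notify Video Ready" text using Slack mrkdwn link syntax.
function buildSlackMessage(videoURL) {
  return [
    "Your video is ready! 🎥",
    "",
    `<${videoURL}|Watch your coaching session video>`,
    "",
    "This captures the key breakthrough moment from your session.",
  ].join("\n");
}

const message = buildSlackMessage("https://example.com/video.mp4");
```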
Step 7: Test and Activate Your Workflow
Validate the full pipeline before enabling the workflow in production.
- Use Chat Input Trigger to send a test message like “Create content from my latest coaching session with video and image.”
- Confirm that Content Coordinator calls Fetch Fathom Transcript, then routes to Post Creator Agent and starts Subflow Start Trigger.
- Verify that Generate Google Doc creates a document and Inject Doc Content inserts the transcript-based content.
- Check that DALL·E Image Request → Image to Binary File → Upload to Drive Storage completes and that Assemble Image Link contains a valid URL.
- Confirm that Video Creation Request → Retrieve Video Result → Assemble Video Link produces a video URL and that Notify Video Ready posts it to Slack.
- When the run succeeds, toggle the workflow to Active for production use.
Common Gotchas
- Google Docs and Google Drive credentials can expire or need specific permissions. If things break, check the n8n credential test and Google account access scopes first.
- If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
- Default prompts in AI nodes are generic. Add your brand voice early or you’ll be editing outputs forever.
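The second gotcha above comes down to polling: video rendering takes a variable amount of time, so the retrieval step needs to retry with a delay rather than fail on the first empty response. A hedged sketch of that pattern, with hypothetical status field names you would map to your video API's actual response:

```javascript
// Retry with a growing delay until the provider reports the job is finished.
// `state === "done"` is a hypothetical field -- adapt to your API's response.
async function pollUntilReady(fetchStatus, { maxAttempts = 10, baseDelayMs = 2000 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const result = await fetchStatus();
    if (result.state === "done") return result;
    await new Promise((r) => setTimeout(r, baseDelayMs * (attempt + 1)));
  }
  throw new Error("Video not ready after max attempts");
}
```

In n8n you would typically implement this with a Wait node in a loop or a longer fixed wait, but the idea is the same: budget for the slowest render, not the average one.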
Frequently Asked Questions
How long does this take to set up?
About 45 minutes if your accounts and subworkflows are ready.
Do I need coding skills?
No. You’ll mostly connect accounts, paste API keys, and test a few runs.
Is this free to run?
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in OpenAI usage (often a few dollars for several long transcripts) plus video generation costs (commonly $0.50-$2.00 per video).
Where should I host n8n?
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
Can I customize the Slack notification?
Yes, and it’s a quick change. Update the Slack “Notify Video Ready” message destination, then adjust the text to include what your team cares about (Doc link, image link, and the assembled video URL). Common tweaks include posting to a client-specific channel, tagging an owner, or sending only when the transcript contains certain keywords.
What if the Fathom transcript fetch fails?
Usually it’s an expired token or missing permissions on the Fathom side. Reconnect the Fathom credential in n8n and rerun a test message, then confirm the transcript fetch is actually returning sessions from the last 7 days. If you have no recent recordings, the workflow can look “broken” when it’s really just empty input. Rate limits can also show up if you hammer it with lots of test runs back-to-back.
How often can I run this workflow?
On n8n Cloud Starter, you can run about 2,500 executions per month, and higher plans handle more. If you self-host, there’s no execution cap (it depends on your server and API limits). Practically, most teams run this a few times a day, and the bigger constraint is OpenAI token usage plus the video provider queue time. The workflow’s transcript window is the most recent 7 days, so it’s optimized for “latest session” use, not bulk backfills.
Is n8n a better fit than Zapier or Make for this?
Often, yes, because this isn’t a simple “send transcript to Doc” zap. You’ve got branching logic, multiple AI calls, file handling (image to binary to Drive), plus polling a video API result, and n8n is built for that kind of flow without turning into a fragile chain. The other advantage is control: self-hosting gives you unlimited executions, which matters once you run it after every meeting. Zapier or Make can still work if you strip it down to one output (just the Doc), but the multimodal parts get expensive and fiddly. If you want a quick recommendation based on your volume and tools, Talk to an automation expert.
Once this is running, every recorded session can turn into a draft, a visual, and a video concept without you babysitting the process. Set it up, run it after calls, and keep the momentum.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.