GPT-4 + Firebase: illustrated stories, stored and ready
Your story process probably breaks in the same place every time: you can write the narrative, but turning it into consistent scenes, prompts, and stored assets becomes a messy pile of drafts, links, and “final_v7” files.
Content marketers feel it when campaigns need “one more version.” Product teams hit it when they need quick explainers. And an agency owner feels it when a client asks for illustrated stories in three languages. This GPT-4 story automation turns a single brief into a complete illustrated story, then saves it cleanly in Firebase so you can publish and reuse without hunting for assets.
You’ll see what the workflow does, what you need to run it, and how the pieces fit together so it’s easy to adapt to your brand and content pipeline.
How This Automation Works
Here’s the complete workflow you’ll be setting up:
n8n Workflow Template: GPT-4 + Firebase: illustrated stories, stored and ready
```mermaid
flowchart LR
subgraph sg0["Story Input Form Flow"]
direction LR
n0["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/form.svg' width='40' height='40' /></div><br/>Story Input Form"]
n1["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Validate Input"]
n2["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Generate Story (GPT-4)"]
n3["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Prepare Story Data"]
n4["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Generate Images (DALL-E 3)"]
n5["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Upload to Firebase Storage"]
n6["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Finalize Story"]
n7["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/code.svg' width='40' height='40' /></div><br/>Prepare Firestore Data"]
n8["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>Save to Firestore"]
n9["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/webhook.dark.svg' width='40' height='40' /></div><br/>Return Response"]
n0 --> n1
n1 --> n2
n2 --> n3
n3 --> n4
n4 --> n5
n5 --> n6
n6 --> n7
n7 --> n8
n8 --> n9
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n0 trigger
class n2,n8,n9 api
class n1,n3,n4,n5,n6,n7 code
classDef customIcon fill:none,stroke:none
class n0,n1,n2,n3,n4,n5,n6,n7,n8,n9 customIcon
```
Why This Matters: Illustrated Stories Get Chaotic Fast
Writing a story is the “fun” part. The drag starts right after: splitting it into scenes, keeping character details consistent, creating image prompts that actually work, generating images, uploading them somewhere reliable, then organizing everything so you can publish later. If you do this manually, you end up with a Google Doc, a folder of images with random names, a handful of temporary links that expire, and a lot of back-and-forth to fix continuity. Multiply that by multiple languages or styles and you’re spending your best creative energy on admin work.
It adds up fast. Here’s where it usually breaks down.
- Scene-by-scene image generation becomes a repetitive loop where you copy prompts, download files, rename them, and upload them again.
- Links get lost or change, which means your “published” story can quietly end up with missing images a week later.
- Without a structured database record, versioning is painful, so you avoid iterating even when you should.
- Teams can’t reuse successful story formats because the assets and metadata aren’t stored in a consistent way.
What You’ll Build: A One-Brief Illustrated Story Pipeline
This workflow starts with a simple story form where you define the topic, language, art style, and how many scenes you want. It validates the submission so you don’t get half-filled requests, then asks GPT-4 to generate a complete narrative broken into scenes with characters and image prompts for each moment. Next, it renders a custom DALL-E 3 illustration for every scene, uploads those images to Firebase Storage so the links stay stable, and assembles everything into a single “final story” payload. Finally, it writes a structured document into Firestore, which makes searching, reusing, and publishing dramatically easier. You get a clean JSON response back with the full story and image URLs, ready for a website, app, newsletter, or content library.
The workflow begins at the form submission. GPT-4 creates the story structure and scene prompts, then a rendering loop generates images and pushes them to Firebase Storage. Firestore becomes the source of truth, and the webhook response gives you the finished package immediately.
What You’re Building
| What Gets Automated | What You’ll Achieve |
|---|---|
| GPT-4 turns one brief into a scene-by-scene narrative with characters and image prompts | Consistent story structure without manual drafting |
| DALL-E 3 renders an illustration for every scene | On-style scene art without copy-paste prompt loops |
| Images are uploaded to Firebase Storage with stable URLs | No expired links or missing assets after publishing |
| The full story is written to Firestore as a structured document | Searchable, reusable, versionable content records |
Expected Results
Say you publish three illustrated stories a week and each story has 8 scenes. Manually, you might spend about 10 minutes per scene just on image prompting, downloading, renaming, and re-uploading, which is roughly 80 minutes per story before you even format anything. With this workflow, the “work” becomes one form submission that takes about 5 minutes, then you wait for generation and uploads to finish. You still review and polish, but you’re no longer doing 20+ repetitive micro-steps per story.
Before You Start
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- OpenAI API for GPT-4 and DALL-E 3 generation.
- Firebase (Firestore + Storage) to store story data and images.
- Google Service Account credentials (create in Google Cloud Console for your Firebase project)
Skill level: Intermediate. You don’t need “coding skills,” but you will paste credentials, edit a few variables in code nodes, and test a webhook response.
Want someone to build this for you? Talk to an automation expert (free 15-minute consultation).
Step by Step
A story brief is submitted through a form. The form collects the topic, language, art style, and the number of scenes (from 1 up to 12). That single submission becomes your trigger.
The request is checked and cleaned up. A validation step makes sure required fields exist and are formatted correctly, so GPT-4 isn’t guessing what you meant. Small detail, big difference.
GPT-4 creates the narrative structure. The workflow calls GPT-4 via an HTTP request, asking for a complete story broken into scenes, with character details and image prompts that match each scene’s moment.
Images are rendered and uploaded. The workflow generates scene illustrations (DALL-E 3), then uploads them to Firebase Storage and assembles a final payload that includes stable image URLs for each scene.
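Conceptually, each scene comes down to two HTTP calls: a DALL-E 3 generation request and a Firebase Storage upload that yields a stable download URL. A minimal sketch of the request shapes (the bucket name, path, and token below are illustrative placeholders, not values from the workflow):

```javascript
// Body for DALL-E 3 (POST https://api.openai.com/v1/images/generations).
function buildImageRequest(dallePrompt) {
  return { model: 'dall-e-3', prompt: dallePrompt, n: 1, size: '1024x1024' };
}

// Firebase Storage upload endpoint plus the stable download URL it yields.
// The bucket, object path, and token here are placeholders.
function storageUrls(bucket, path, token) {
  const encoded = encodeURIComponent(path);
  return {
    upload: `https://firebasestorage.googleapis.com/v0/b/${bucket}/o?name=${encoded}`,
    download: `https://firebasestorage.googleapis.com/v0/b/${bucket}/o/${encoded}?alt=media&token=${token}`,
  };
}

const req = buildImageRequest('A watercolor fox leaping from a cliff');
const urls = storageUrls('my-project.appspot.com', 'stories/story_123/scene_1.png', 'abc');
console.log(urls.download);
```

The `alt=media&token=...` download URL is what makes the stored links stable: unlike the temporary URLs OpenAI returns, it keeps working as long as the object and token exist in your bucket.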
Everything is stored and returned. The story metadata and scene data are written into Firestore, and the workflow replies with JSON containing the story plus image links so you can publish immediately or feed another workflow.
You can easily modify the language list or art styles to match your brand, then store extra fields (like “campaign name” or “client”) in the Firestore document. See the full implementation guide below for customization options.
Step-by-Step Implementation Guide
Step 1: Configure the Form Trigger
Set up the form that starts the workflow so users can submit story details.
- Add the Story Form Intake node as your trigger.
- Set Form Title to `AI Story Generator`.
- Set Form Description to `Create an illustrated story with AI. Fill in the details and get a complete story with DALL-E 3 images for each scene. Generation takes 2-5 minutes.`
- Configure the form fields exactly as defined in Story Form Intake (Story Topic, Language, Image Style, Number of Scenes, Target Audience, Story Mood).
- Set the submit confirmation text in Respond With Options to `Your story is being created! This takes 2-5 minutes. Please wait...`.
Step 2: Validate Inputs and Generate Story Metadata
Use the initial code step to normalize the submission, clamp the number of scenes, and create a unique story ID.
- Connect Story Form Intake to Verify Submission Data.
- In Verify Submission Data, keep the provided JavaScript to validate scene count (1–12) and generate `storyId`.
- Confirm the output includes `numScenes` and `storyId` for later nodes.
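The validation logic in Verify Submission Data can be sketched roughly like this (a minimal version; the field names are assumed from the form definition, and the node's actual JavaScript may differ in details):

```javascript
// Sketch of "Verify Submission Data": check required fields,
// clamp the scene count to 1-12, and mint a unique story ID.
function verifySubmission(form) {
  const required = ['Story Topic', 'Language', 'Image Style'];
  for (const field of required) {
    if (!form[field] || String(form[field]).trim() === '') {
      throw new Error(`Missing required field: ${field}`);
    }
  }

  // Clamp the requested scene count to the allowed 1-12 range.
  const requested = parseInt(form['Number of Scenes'], 10);
  const numScenes = Math.min(12, Math.max(1, isNaN(requested) ? 5 : requested));

  // Unique story ID, later reused as the Firestore document ID.
  const storyId = `story_${Date.now()}_${Math.random().toString(36).slice(2, 8)}`;

  return { ...form, numScenes, storyId };
}

const result = verifySubmission({
  'Story Topic': 'A fox learns to fly',
  'Language': 'English',
  'Image Style': 'watercolor',
  'Number of Scenes': '20', // out of range, gets clamped
});
console.log(result.numScenes); // 12
```

The clamp is what guarantees downstream nodes never loop over an absurd scene count, even if someone types 200 into the form.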
Step 3: Set Up AI Narrative Generation
Configure the OpenAI chat completion request that generates the story structure, scenes, and image prompts.
- Connect Verify Submission Data to Compose Narrative (GPT-4).
- In Compose Narrative (GPT-4), set URL to
https://api.openai.com/v1/chat/completionsand Method toPOST. - Set JSON Body to the provided schema in the node, ensuring it includes the expressions like
{{ $json['Story Topic'] }}and{{ $json.numScenes }}. - Credential Required: Connect your openAiApi credentials in Compose Narrative (GPT-4).
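For orientation, the chat-completions body the node sends looks roughly like this. The exact prompt wording and JSON schema live in the node itself, so treat the strings below as an illustrative shape, not the workflow's literal prompt:

```javascript
// Illustrative shape of the body POSTed to /v1/chat/completions.
// Prompt text here is an assumption, not copied from the workflow.
function buildStoryRequest(input) {
  return {
    model: 'gpt-4',
    messages: [
      {
        role: 'system',
        content:
          'You are a storyteller. Reply with JSON only: ' +
          '{"title": ..., "characters": [...], ' +
          '"scenes": [{"text": ..., "imagePrompt": ...}]}',
      },
      {
        role: 'user',
        content:
          `Write a ${input.numScenes}-scene illustrated story in ` +
          `${input['Language']} about: ${input['Story Topic']}. ` +
          `Art style for image prompts: ${input['Image Style']}.`,
      },
    ],
    temperature: 0.8,
  };
}

const body = buildStoryRequest({
  numScenes: 6,
  'Language': 'English',
  'Story Topic': 'A fox learns to fly',
  'Image Style': 'watercolor',
});
console.log(body.model); // "gpt-4"
```

The key idea is that the system message forces a machine-readable structure, so the later code nodes can split scenes and prompts without fragile text parsing.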
Step 4: Build the Story, Generate Images, and Upload to Storage
This workflow uses multiple code nodes to prepare prompts, call DALL-E, and upload images to Firebase Storage.
- Connect Compose Narrative (GPT-4) to Shape Story Payload, and keep the JavaScript that sanitizes prompts and sets `dallePrompt`.
- Connect Shape Story Payload → Render Scene Images → Upload Images to Storage → Assemble Final Story in sequence.
- In Render Scene Images, replace `[CONFIGURE_YOUR_API_KEY]` with your OpenAI key or adjust the code to use credentials.
- In Upload Images to Storage, replace `[YOUR_ID]` with your Firebase Storage bucket name and update `[CONFIGURE_YOUR_TOKEN]` if using download tokens.
- In Assemble Final Story, keep the final story object mapping so it strips internal fields like `sceneIndex` and `dallePrompt`.

Note: If you leave placeholders like `[CONFIGURE_YOUR_API_KEY]` or `[YOUR_ID]` in place, the workflow will generate placeholder images or fail uploads.

Step 5: Store the Story in Firestore and Respond
Finalize the story object, convert it to Firestore format, store it, and return a JSON response to the form submission.
- Connect Assemble Final Story to Build Firestore Document and keep the mapping that creates `body`, `collection`, `docId`, and `projectId`.
- In Build Firestore Document, replace `[YOUR_ID]` in `FIREBASE_PROJECT_ID` with your Firebase project ID.
- Connect Build Firestore Document to Store in Firestore.
- In Store in Firestore, set URL to `=https://firestore.googleapis.com/v1/projects/{{ $json.projectId }}/databases/(default)/documents/{{ $json.collection }}/{{ $json.docId }}`.
- Set JSON Body to `={{ JSON.stringify($json.body) }}`.
- Credential Required: Connect your googleApi credentials in Store in Firestore.
- Connect Store in Firestore to Send Webhook Reply and keep the response body with `{{ JSON.stringify($('Build Firestore Document').first().json.finalStory) }}`.
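One detail worth knowing here: the Firestore REST API expects typed values rather than plain JSON, which is the conversion Build Firestore Document handles. A minimal converter, as a sketch of the idea:

```javascript
// Convert a plain JS value into Firestore's typed-value format,
// e.g. "hi" -> { stringValue: "hi" }, 6 -> { integerValue: "6" }.
function toFirestoreValue(value) {
  if (value === null) return { nullValue: null };
  if (typeof value === 'boolean') return { booleanValue: value };
  if (typeof value === 'number') {
    return Number.isInteger(value)
      ? { integerValue: String(value) } // Firestore encodes ints as strings
      : { doubleValue: value };
  }
  if (typeof value === 'string') return { stringValue: value };
  if (Array.isArray(value)) {
    return { arrayValue: { values: value.map(toFirestoreValue) } };
  }
  // Nested objects become mapValue fields.
  const fields = {};
  for (const [k, v] of Object.entries(value)) fields[k] = toFirestoreValue(v);
  return { mapValue: { fields } };
}

// Wrap a story object into a Firestore document body.
const story = { title: 'A fox learns to fly', numScenes: 6 };
const doc = { fields: {} };
for (const [k, v] of Object.entries(story)) doc.fields[k] = toFirestoreValue(v);
console.log(JSON.stringify(doc));
```

If you later add custom metadata (campaign, client, approval status), it only needs to pass through this same conversion to land in Firestore cleanly.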
Step 6: Test and Activate Your Workflow
Run a full test to confirm the form submission triggers story creation, image generation, Firestore storage, and a JSON response.
- Click Execute Workflow and submit a sample form in Story Form Intake.
- Verify that Compose Narrative (GPT-4) returns a valid JSON schema response with scenes and characters.
- Confirm images are generated in Render Scene Images and URLs are updated in Upload Images to Storage.
- Check that Store in Firestore writes the document to your `stories/{language}/items` collection.
- Ensure Send Webhook Reply responds with `"success": true` and the full `story` object.
- When satisfied, toggle the workflow to Active for production use.
Troubleshooting Tips
- OpenAI credentials can expire, get revoked, or hit usage limits. If GPT-4 or DALL-E 3 calls fail, check your OpenAI dashboard and update the API key in the workflow variables first.
- If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
- Firebase permissions are the silent killer here. If Storage uploads or Firestore writes fail, verify the service account has access to the correct project, and confirm your FIREBASE_BUCKET and FIREBASE_PROJECT_ID values match what’s in the Firebase console.
Quick Answers
How long does setup take?
About 10–15 minutes if your keys and Firebase project are ready.
Do I need coding skills?
No coding required. You will edit a few variables in code nodes and paste credentials.
Is this free to run?
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in OpenAI API costs for GPT-4 and DALL-E 3 image generation.
Should I use n8n Cloud or self-host?
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.
Can I customize the story output?
Yes, and you should. Most teams customize the prompt inside the “Compose Narrative (GPT-4)” HTTP Request and adjust the “Shape Story Payload” code so the JSON matches their CMS fields. You can also change how many scenes are allowed, add a “brand voice” field in the form, or store extra metadata in the “Build Firestore Document” step (like campaign, audience, or approval status).
Why are my Firebase uploads or writes failing?
Usually it’s service account permissions or a mismatched project setting. Double-check that your Google Service Account can write to Firestore and upload to Storage, then verify FIREBASE_PROJECT_ID and FIREBASE_BUCKET in the workflow match your Firebase console. If it still fails, look at the HTTP Request error details in n8n; Firebase errors are often descriptive once you open the full response.
How many stories can I generate per month?
On a typical n8n Cloud plan you can run thousands of executions per month, and self-hosting removes execution caps (your server becomes the limit). In practice, image generation is the main bottleneck, so bigger stories with 10–12 scenes will take longer and cost more than 3–5 scene stories.
Is n8n better than Zapier or Make for this?
Often, yes. This workflow isn’t just “send data from A to B”; it loops over scenes, builds structured payloads, uploads files, and writes a database document, which is where simpler tools can get awkward or expensive. n8n also gives you more control over the JSON you store in Firestore, which matters if you plan to search, version, or reuse content later. Zapier or Make can still work if you keep the scope small, like one scene or a single image per story. If you’re unsure, Talk to an automation expert and you’ll get a straight recommendation.
Once this is running, illustrated stories stop being a one-off project and start being a repeatable system. You’ll spend your time improving ideas and editing, not chasing files and broken links.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.