BrowserAct + Google Gemini, review insights to Telegram
Reading reviews is easy. Turning 200 messy comments into a clear “here’s what to fix next” summary is the part that quietly wrecks your week.
Product managers feel it during roadmap planning, but e-commerce owners and marketing leads get pulled into it too. This review insights automation collects reviews for you, groups the themes, and sends a clean brief to Telegram and email.
You’ll see how this workflow gathers review data with BrowserAct, has Google Gemini generate sentiment + recommendations, then delivers the results where your team will actually read them.
How This Automation Works
The full n8n workflow, from trigger to final output:
n8n Workflow Template: BrowserAct + Google Gemini, review insights to Telegram
```mermaid
flowchart LR
subgraph sg0["When clicking ‘Execute workflow’ Flow"]
direction LR
n0["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>HTTP Request"]
n1["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/httprequest.dark.svg' width='40' height='40' /></div><br/>HTTP Request1"]
n2@{ icon: "mdi:cog", form: "rounded", label: "Wait", pos: "b", h: 48 }
n3@{ icon: "mdi:cog", form: "rounded", label: "Wait1", pos: "b", h: 48 }
n4@{ icon: "mdi:swap-horizontal", form: "rounded", label: "If", pos: "b", h: 48 }
n5@{ icon: "mdi:swap-horizontal", form: "rounded", label: "If1", pos: "b", h: 48 }
n6@{ icon: "mdi:robot", form: "rounded", label: "AI Agent", pos: "b", h: 48 }
n7@{ icon: "mdi:message-outline", form: "rounded", label: "Send email in Send Email", pos: "b", h: 48 }
n8@{ icon: "mdi:message-outline", form: "rounded", label: "Send a text message in Teleg..", pos: "b", h: 48 }
n9@{ icon: "mdi:message-outline", form: "rounded", label: "Send email", pos: "b", h: 48 }
n10["<div style='background:#f5f5f5;padding:10px;border-radius:8px;display:inline-block;border:1px solid #e0e0e0'><img src='https://flowpast.com/wp-content/uploads/n8n-workflow-icons/telegram.svg' width='40' height='40' /></div><br/>Send a text message1"]
n11@{ icon: "mdi:play-circle", form: "rounded", label: "When clicking ‘Execute workf..", pos: "b", h: 48 }
n12@{ icon: "mdi:brain", form: "rounded", label: "Google Gemini Chat Model", pos: "b", h: 48 }
n4 --> n1
n4 --> n3
n5 --> n6
n5 --> n2
n2 --> n1
n3 --> n0
n6 --> n10
n0 --> n4
n1 --> n5
n10 --> n9
n12 -.-> n6
n7 -.-> n6
n8 -.-> n6
n11 --> n0
end
%% Styling
classDef trigger fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
classDef ai fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
classDef aiModel fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
classDef decision fill:#fff8e1,stroke:#f9a825,stroke-width:2px
classDef database fill:#fce4ec,stroke:#c2185b,stroke-width:2px
classDef api fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef code fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef disabled stroke-dasharray: 5 5,opacity: 0.5
class n11 trigger
class n6 ai
class n12 aiModel
class n4,n5 decision
class n0,n1 api
classDef customIcon fill:none,stroke:none
class n0,n1,n10 customIcon
```
The Problem: Review Data Is Useful, But It’s Not Usable
Product reviews are “truthy” in a way dashboards aren’t. They tell you what customers expected, what broke, and what made them angry enough to type a paragraph. The problem is volume and chaos. Reviews live on marketplaces, on-site widgets, and random threads, and you end up doing the same cycle: scrape a few, skim a few, guess the themes, then try to explain it all in Slack without sounding vague. After a while, you either ignore reviews (bad), or you drown in them (also bad). Honestly, most teams get stuck in that middle zone where nothing changes because the insights never get packaged into decisions.
It adds up fast. Here’s where it breaks down.
- You spend about 2 hours pulling reviews into something you can scan, and it still misses edge cases.
- The same complaints keep resurfacing because nobody has time to summarize them cleanly and share them.
- Manual summaries are inconsistent, so one person’s “minor issue” is another person’s “stop-ship bug.”
- Even when you find a clear pattern, it gets buried in chat, which means it never becomes an action item.
The Solution: Scrape Reviews, Then Let Gemini Write the Brief
This workflow starts with a manual run in n8n, which is useful when you want a fresh snapshot before a launch, a pricing change, or a roadmap meeting. It kicks off a BrowserAct scraping task through an HTTP request, then patiently checks back until the scrape is done. Once the dataset is ready, an AI Agent powered by Google Gemini reads the review summaries, identifies sentiment, and pulls out the repeating themes customers actually care about. Then it generates concrete improvement recommendations, not just “customers are unhappy.” Finally, the workflow posts the recommendations to Telegram and sends the same brief by email so it lands in both fast chat and slower inbox workflows.
The workflow begins when you run it manually. BrowserAct collects the reviews in the background while n8n uses wait-and-check loops to avoid half-finished data. Gemini then turns that raw text into a structured set of themes and next steps, and the final brief goes out via Telegram and email.
What You Get: Automation vs. Results
| What This Workflow Automates | Results You’ll Get |
|---|---|
| Scraping product reviews with BrowserAct | A fresh review dataset on demand, with no copy-paste |
| Sentiment and theme analysis with Google Gemini | Concrete improvement recommendations instead of raw complaints |
| Delivery to Telegram and email | A brief your team actually reads, in chat and inbox |
Example: What This Looks Like
Say you sell one hero product and you want to review feedback every Friday. Manually, grabbing reviews, skimming, taking notes, and writing a summary can easily take about 3 hours (and you still forget to share it). With this workflow, you trigger the run in under a minute, BrowserAct scrapes while n8n waits and rechecks, then Gemini writes the themes and recommendations. You get a Telegram brief and an email in roughly 10 minutes of “hands-off” time.
What You’ll Need
- n8n instance (try n8n Cloud free)
- Self-hosting option if you prefer (Hostinger works well)
- BrowserAct for scraping product review pages.
- Google Gemini to generate sentiment themes and recommendations.
- BrowserAct API key (get it from your BrowserAct dashboard).
Skill level: Intermediate. You will connect credentials, edit a few fields, and test a scrape loop with wait checks.
Don’t want to set this up yourself? Talk to an automation expert (free 15-minute consultation).
How It Works
Manual run kicks things off. You trigger the workflow when you want a fresh review snapshot (weekly, before a launch, after a refund spike, whenever).
BrowserAct collects reviews in the background. n8n sends an HTTP request to start the scraping task, then checks the task status until BrowserAct returns a complete dataset you can trust.
Gemini turns raw text into themes and next steps. The AI Agent reads the scraped review summaries, identifies sentiment patterns, and outputs actionable recommendations you can hand to product, support, or marketing without rewriting.
Insights get delivered where decisions happen. The workflow posts the brief to Telegram and follows up with an email, so it survives beyond a chat scroll.
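Conceptually, those four stages form one control loop. Here is an illustrative Python sketch where each function argument stands in for an n8n node; the function names are hypothetical, not part of the template.

```python
import time

def run_review_brief(start_scrape, get_task, summarize, deliver, delay=60):
    """Illustrative end-to-end flow: start_scrape and get_task stand in for
    the two HTTP Request nodes, summarize for the Gemini agent, and
    deliver for the Telegram + email notifications."""
    task_id = start_scrape()               # BrowserAct Task Start
    task = get_task(task_id)               # BrowserAct Task Status
    while task.get("status") != "finished":
        time.sleep(delay)                  # Wait node: pause before re-checking
        task = get_task(task_id)
    brief = summarize(task["output"])      # Review Insight Agent (Gemini)
    deliver(brief)                         # Telegram + email delivery
    return brief
```

The point of the loop is that nothing downstream runs until the scrape reports completion, which is exactly what the Wait and If nodes enforce in n8n.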
You can easily modify the scraping target and the output format to match your product category and reporting style. See the full implementation guide below for customization options.
Step-by-Step Implementation Guide
Step 1: Configure the Manual Trigger
Start the workflow manually to initiate the review extraction and analysis sequence.
- Add the Manual Run Trigger node to the canvas.
- Connect Manual Run Trigger to BrowserAct Task Start.
- (Optional) Keep Flowpast Branding as a visual reference note for documentation.
Step 2: Connect BrowserAct
Trigger the BrowserAct workflow and check the task status using authenticated API requests.
- Open BrowserAct Task Start and set URL to `https://api.browseract.com/v2/workflow/run-task`.
- Set Method to `POST` and enable Send Body = `true`.
- Add a body parameter: workflow_id = `53146166844202600`.
- Credential Required: Connect your `httpBearerAuth` credentials in BrowserAct Task Start.
- Open BrowserAct Task Status and set URL to `https://api.browseract.com/v2/workflow/get-task`.
- Enable Send Query and set query parameter task_id to `{{ $json.id }}`.
- Credential Required: Connect your `httpBearerAuth` credentials in BrowserAct Task Status.
Execution Flow: BrowserAct Task Start → Validate Task Response → BrowserAct Task Status → Check Task Complete.
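If you want to test these two calls outside n8n, here is a minimal Python sketch that builds the same requests. The endpoints, parameters, and Bearer auth mirror the node settings above; the helper names themselves are just for illustration.

```python
import json
import urllib.request

BASE = "https://api.browseract.com/v2/workflow"

def start_task_request(api_key: str, workflow_id: str) -> urllib.request.Request:
    """Build the POST that mirrors the BrowserAct Task Start node (run-task)."""
    body = json.dumps({"workflow_id": workflow_id}).encode()
    return urllib.request.Request(
        f"{BASE}/run-task",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

def status_request(api_key: str, task_id: str) -> urllib.request.Request:
    """Build the GET that mirrors the BrowserAct Task Status node (get-task)."""
    return urllib.request.Request(
        f"{BASE}/get-task?task_id={task_id}",
        headers={"Authorization": f"Bearer {api_key}"},
    )
```

Send either request with `urllib.request.urlopen(req)` and inspect the JSON response for the `id` and `status` fields the workflow branches on.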
Step 3: Set Up Task Validation and Wait Loops
Validate task responses, poll status until completion, and handle retry timing.
- In Validate Task Response, set two conditions:
  - leftValue = `{{ $json.error }}` with operator `notExists`.
  - leftValue = `{{ $json.id }}` with operator `notEquals` and rightValue = `null`.
- Connect the true output of Validate Task Response to BrowserAct Task Status.
- Connect the false output of Validate Task Response to Delay Before Check, and then back to BrowserAct Task Start.
- Configure Delay Before Check with Unit = `minutes` and Amount = `1`.
- In Check Task Complete, set two conditions:
  - leftValue = `{{ $json.error }}` with operator `notExists`.
  - leftValue = `{{ $json.status }}` with operator `equals` and rightValue = `finished`.
- Connect the false output of Check Task Complete to Pause Before Retry, then back to BrowserAct Task Status.
- Set Pause Before Retry to Unit = `minutes` and Amount = `1`.
Execution Flow: Check Task Complete → Review Insight Agent when status is finished; otherwise Pause Before Retry → BrowserAct Task Status.
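The same polling logic can be written in a few lines of Python, which makes it easier to reason about what the If and Wait nodes are doing. Here `check_status` is a placeholder for the status call, and the one-minute pause and retry cap are tunable assumptions.

```python
import time

def wait_for_completion(check_status, pause_seconds=60, max_checks=30):
    """Mirror Check Task Complete + Pause Before Retry: fail fast when an
    'error' field appears, return when status is 'finished', else wait."""
    for _ in range(max_checks):
        task = check_status()
        if "error" in task:                    # the notExists condition tripped
            raise RuntimeError(f"BrowserAct reported: {task['error']}")
        if task.get("status") == "finished":   # the equals-'finished' condition
            return task
        time.sleep(pause_seconds)              # Pause Before Retry (1 minute)
    raise TimeoutError("Task never reached 'finished'; increase the wait")
```

The retry cap is worth keeping in mind: a large scrape that outlives your polling window looks identical to a hung task, which is why the Common Gotchas section suggests bumping wait durations.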
Step 4: Set Up Review Insight Agent
Analyze review summaries and generate improvement recommendations using the AI model and tools.
- Open Review Insight Agent and set Prompt Type to `define`.
- Set Text to the full prompt, including the expression `{{ $json.output.string }}`. The prompt tells the agent that the expression contains the reviews as a list of maps shaped like `[ { "Name": ..., "Rating": ..., "Summary": ... } ]`, and instructs it to read every single "Summary", analyze them, generate improvement recommendations as text output, and send those recommendations to Telegram and email with further details.
- Connect Gemini Chat Model as the language model for Review Insight Agent.
- Credential Required: Connect your `googlePalmApi` credentials in Gemini Chat Model.
- Attach Email Tool Dispatch and Telegram Tool Notify as AI tools to Review Insight Agent.
- Credential Required: Connect your `smtp` credentials for Email Tool Dispatch and your `telegramApi` credentials for Telegram Tool Notify. These tools are invoked by Review Insight Agent.
Note: The AI tool nodes use expressions for content, such as {{ /*n8n-auto-generated-fromAI-override*/ $fromAI('Text', ``, 'string') }}. Leave these as-is to let the agent supply tool output.
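To make the data shape concrete, here is a hypothetical sketch of how the agent's input could be assembled from the list of `{"Name", "Rating", "Summary"}` maps the prompt describes. The agent does this internally in n8n; this helper exists only to show what it is being asked to read.

```python
import json

def build_agent_input(output_string: str) -> str:
    """Hypothetical sketch: parse the review list the prompt describes and
    lay it out one review per line for sentiment/theme analysis."""
    reviews = json.loads(output_string)
    lines = [
        f"- {r.get('Name', 'Anonymous')} ({r.get('Rating', '?')} stars): {r.get('Summary', '')}"
        for r in reviews
    ]
    return (
        "Analyze every summary below and generate improvement recommendations:\n"
        + "\n".join(lines)
    )
```

If your BrowserAct template returns different field names, the prompt in Review Insight Agent needs to describe that shape instead.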
Step 5: Configure Output Notifications
Send the AI recommendations to Telegram and then email.
- In Telegram Recommendation, set Text to `=Recommendation: {{ $json.output }}` and Chat ID to your Telegram user or channel ID.
- Credential Required: Connect your `telegramApi` credentials in Telegram Recommendation.
- Connect Telegram Recommendation to Email Recommendation Send.
- In Email Recommendation Send, set Text to `{{ $('Review Insight Agent').item.json.output }}`, Subject to `Recommendation`, and set To Email/From Email to your addresses.
- Credential Required: Connect your `smtp` credentials in Email Recommendation Send.
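Under the hood, n8n's Telegram node calls the Bot API's sendMessage method. If you ever need to debug delivery outside n8n, here is a minimal sketch of the equivalent call; the token and chat ID are placeholders.

```python
import urllib.parse
import urllib.request

def telegram_message_request(bot_token: str, chat_id: str, text: str) -> urllib.request.Request:
    """Build (but don't send) the Bot API sendMessage call the Telegram
    node performs; pass the Request to urllib.request.urlopen to send."""
    payload = urllib.parse.urlencode({"chat_id": chat_id, "text": text}).encode()
    return urllib.request.Request(
        f"https://api.telegram.org/bot{bot_token}/sendMessage",
        data=payload,
        method="POST",
    )
```

A quick manual send with a test chat ID is an easy way to confirm the bot token and chat ID are valid before blaming the workflow.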
Note: Replace `[YOUR_EMAIL]` and `[YOUR_ID]` in Email Recommendation Send, Email Tool Dispatch, Telegram Recommendation, and Telegram Tool Notify before testing.
Step 6: Test and Activate Your Workflow
Run the workflow manually to confirm the BrowserAct task completes, the AI generates recommendations, and notifications are delivered.
- Click Execute Workflow from Manual Run Trigger to start the run.
- Confirm that BrowserAct Task Start returns an `id` and that BrowserAct Task Status eventually reports `finished`.
- Verify that Review Insight Agent outputs recommendation text and that Telegram Recommendation and Email Recommendation Send deliver messages.
- When successful, toggle the workflow to Active for production use.
Common Gotchas
- BrowserAct credentials can expire or need specific permissions. If things break, check your BrowserAct dashboard API key and template access first.
- If you’re using Wait nodes or external rendering, processing times vary. Bump up the wait duration if downstream nodes fail on empty responses.
- Default prompts in AI nodes are generic. Add your brand voice early or you’ll be editing outputs forever.
Frequently Asked Questions
**How long does this take to set up?**
About 30 minutes if your accounts and credentials are ready.

**Do I need technical skills to build this?**
No. You’ll mainly connect BrowserAct, Gemini, and Telegram in n8n. The only “techy” part is testing the scrape status checks once.

**Is n8n free to use?**
Yes. n8n has a free self-hosted option and a free trial on n8n Cloud. Cloud plans start at $20/month for higher volume. You’ll also need to factor in BrowserAct usage and Gemini API costs based on how many reviews you analyze.

**Where should I run n8n?**
Two options: n8n Cloud (managed, easiest setup) or self-hosting on a VPS. For self-hosting, Hostinger VPS is affordable and handles n8n well. Self-hosting gives you unlimited executions but requires basic server management.

**Can I adapt this to a different product or review source?**
Yes, but you’ll do it at the scraping layer. Swap the BrowserAct scraping template or target URL(s) used in the “BrowserAct Task Start” HTTP request, then keep the same Gemini Agent prompt structure so your output stays consistent. Common tweaks include filtering for 1–3 star reviews, grouping themes by product variant, and changing the Telegram message format to a shorter “top 5 findings” summary.

**What if the BrowserAct task fails or never completes?**
Usually it’s an API key issue or a missing permission on the BrowserAct account tied to your template. Regenerate the BrowserAct API key, update it in n8n, and rerun the manual trigger. If the workflow starts but never completes, the task status check may be pointing at the wrong workflow ID or the wait time is too short for a big scrape.

**How many reviews can this handle per run?**
Most small teams run a few hundred reviews per run without issues.

**Is n8n better than Zapier or Make for this?**
Often, yes, because this workflow needs looping waits and conditional checks while BrowserAct finishes scraping, and n8n handles that kind of control flow cleanly. You also get the option to self-host, which is helpful if you plan to run review analysis frequently without watching execution limits. Another win is flexibility in the AI step: the Agent prompt can be adjusted to output exactly what your team needs (themes, examples, recommended fixes, even release-note wording). If you only want a simple “new review → send alert” setup, Zapier or Make may feel quicker. Talk to an automation expert if you want help choosing.
When reviews turn into a readable brief automatically, you stop “trying to listen” and start acting on what customers keep telling you. Set it up once, then run it whenever you need clarity.
Need Help Setting This Up?
Our automation experts can build and customize this workflow for your specific needs. Free 15-minute consultation—no commitment required.