
Beyond Cron and Webhooks: Five Event-Driven Triggers for Reactive AI Automation

JieGou adds five event-driven trigger types — run completion chaining, connector data changes, email received, Slack messages, and browser events — to complement existing cron and webhook triggers for fully reactive automation.

JT
JieGou Team
10 min read

Cron schedules and webhooks cover the basics. Run this recipe every morning at 8 AM. Fire this workflow when a third-party system sends a POST request. That handles a lot of use cases.

But real automation needs to react to things happening — a workflow finishing, an email arriving, data changing in a connected tool, a specific element appearing on a web page. Polling on a timer and hoping you catch the event is wasteful and slow. Waiting for someone to wire up a webhook on the other end assumes the other end supports webhooks.

JieGou now supports five event-driven trigger types that complement the existing cron and webhook options. Each one watches for a specific kind of event and fires your recipe or workflow when it occurs.

The full trigger landscape

JieGou supports seven trigger types across two categories:

| Trigger | Model | Rate limit | Use case |
| --- | --- | --- | --- |
| Cron schedule | Timer | N/A | Run every hour, daily at 9 AM, etc. |
| Webhook | Push | 12/min per trigger | External system sends HTTP POST |
| Run completed | Push | 6/min per trigger, 30/min per account | Chain recipes/workflows together |
| Connector changed | Poll (60s–24h) | Per poll interval | React to CRM, spreadsheet, or database changes |
| Email received | Poll (default 300s) | 5 messages per cycle | Incoming email triggers triage |
| Slack message | Push | Per Slack Events API | Channel message triggers extraction |
| Browser event | Push | 20/min per user | DOM change triggers investigation |

Cron and webhook triggers are available on all plans. The five new event-driven triggers are available on Pro plans and above.

Run completion chaining

Type: run_completed | Model: Push-based via in-process event bus

The most common automation pattern is “when X finishes, start Y.” A summarization recipe completes, so a distribution workflow should start. A data enrichment workflow finishes, so a scoring recipe should run against the results.

Run completion chaining handles this with zero polling. It uses an in-process event bus — when a run finishes, the event fires immediately, not on the next poll cycle.

Configuration options:

  • Watch target: A specific recipe, a specific workflow, or “any run in this account”
  • Status filter: Trigger only on success, only on error, or both
  • Output mapping: Three modes for extracting data from the completed run’s payload

The three output mapping modes give you control over what data flows into the triggered run:

  • Passthrough — The entire event payload becomes the input. Useful when the downstream recipe expects the full context.
  • Field mapping — Extract specific fields using dot-path notation (e.g., event.output.summary maps to the summary input field).
  • Template — Use {{variable}} substitution with nested path support for more complex transformations.
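The three modes are easy to picture in code. This TypeScript sketch uses hypothetical `resolveDotPath`, `mapFields`, and `renderTemplate` helpers — illustrative names, not JieGou's actual internals:

```typescript
type EventPayload = Record<string, unknown>;

// Resolve a dot-path like "event.output.summary" against a payload.
function resolveDotPath(obj: unknown, path: string): unknown {
  return path.split(".").reduce<unknown>(
    (cur, key) =>
      cur != null && typeof cur === "object" ? (cur as EventPayload)[key] : undefined,
    obj,
  );
}

// Field mapping: { summary: "event.output.summary" } → { summary: <value> }
function mapFields(event: EventPayload, mapping: Record<string, string>): EventPayload {
  const out: EventPayload = {};
  for (const [inputField, path] of Object.entries(mapping)) {
    out[inputField] = resolveDotPath({ event }, path);
  }
  return out;
}

// Template mode: {{variable}} substitution with nested path support.
function renderTemplate(event: EventPayload, template: string): string {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (_, path) =>
    String(resolveDotPath({ event }, path) ?? ""),
  );
}

const sampleEvent = { output: { summary: "Draft ready", score: 9 } };
mapFields(sampleEvent, { summary: "event.output.summary" }); // → { summary: "Draft ready" }
renderTemplate(sampleEvent, "Summary: {{event.output.summary}} ({{event.output.score}}/10)");
// → "Summary: Draft ready (9/10)"
```

Passthrough is the degenerate case: the whole `sampleEvent` object becomes the input unchanged.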

The rate limit is 6 fires per minute per trigger — deliberately half the webhook rate of 12/min. This prevents cascade storms, where a chain of triggers amplifies into hundreds of concurrent runs. The account-wide cap of 30/min provides an additional backstop.

Example pipeline: A content creation recipe generates a blog draft. On success, a distribution workflow triggers that posts to social channels and schedules email sends. On completion of distribution, a reporting recipe triggers that aggregates engagement metrics. Three recipes, fully automated, no cron schedule guessing when each stage finishes.

Connector data change detection

Type: connector_changed | Model: Poll-based via Cloud Scheduler

Not every system sends webhooks when data changes. CRM records update silently. Spreadsheet cells change without notification. Database rows get modified by background jobs.

Connector change detection polls your connected data sources at configurable intervals (60 seconds to 24 hours) and fires when the data actually changes.

How change detection works:

  1. Each poll cycle fetches the current data from the connector
  2. The system computes a SHA-256 hash of the response
  3. The hash is compared against the previously stored hash
  4. If the hashes differ, the trigger fires

The first poll after setup stores the hash without firing. This avoids false positives — you don’t want every trigger firing the moment you configure it just because “the data is different from nothing.”
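The detection loop fits in a few lines. This sketch keeps state in memory and assumes a generic `pollCycle` helper; the real system runs on Cloud Scheduler and persists state, but the hash-compare idea is the same:

```typescript
import { createHash } from "node:crypto";

function sha256(data: string): string {
  return createHash("sha256").update(data).digest("hex");
}

interface TriggerState {
  lastHash?: string; // undefined until the first poll completes
}

// Returns true when the trigger should fire this cycle.
function pollCycle(state: TriggerState, currentData: unknown): boolean {
  const hash = sha256(JSON.stringify(currentData));
  if (state.lastHash === undefined) {
    state.lastHash = hash; // first poll: store the baseline, never fire
    return false;
  }
  const changed = hash !== state.lastHash;
  state.lastHash = hash;
  return changed;
}

const state: TriggerState = {};
pollCycle(state, [{ id: 1 }]);            // → false (baseline stored)
pollCycle(state, [{ id: 1 }]);            // → false (no change)
pollCycle(state, [{ id: 1 }, { id: 2 }]); // → true  (hash differs)
```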

Change type filters:

  • Any change — Any difference in the hashed data
  • New records — Only fires when the record count increases
  • Specific field changes — Monitor particular fields and ignore changes to others

Field mapping with transforms: When the trigger fires, you can map connector fields to recipe inputs with built-in transforms — string, number, boolean, json_parse, and date_iso. A CRM “deal value” field stored as a string can be automatically parsed as a number before it reaches your scoring recipe.
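A hypothetical `applyTransform` helper illustrates what the five transforms do (the actual implementation may differ):

```typescript
type Transform = "string" | "number" | "boolean" | "json_parse" | "date_iso";

// Coerce a raw connector value into the type a recipe input expects.
function applyTransform(value: unknown, transform: Transform): unknown {
  switch (transform) {
    case "string":
      return String(value);
    case "number":
      return Number(value); // "42500.00" → 42500
    case "boolean":
      return value === true || value === "true";
    case "json_parse":
      return JSON.parse(String(value));
    case "date_iso":
      return new Date(String(value)).toISOString();
  }
}

applyTransform("42500.00", "number"); // → 42500
applyTransform('{"stage":"Qualified"}', "json_parse"); // → { stage: "Qualified" }
```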

Example: A CRM connector monitors the “Leads” object. When a new lead appears or an existing lead’s status changes to “Qualified,” the trigger fires a lead scoring workflow that enriches the lead from three data sources and assigns a priority score.

Email received

Type: email_received | Model: Poll-based via Gmail OAuth

Email remains the entry point for a surprising number of business processes. Support requests, vendor invoices, alert notifications, approval responses — they all arrive in an inbox.

The email trigger integrates with Gmail via OAuth MCP tools and polls for new messages matching your criteria.

Configuration:

  • Gmail query syntax for filtering — e.g., from:alerts@example.com subject:urgent or label:support-inbox is:unread
  • Poll interval: Default 300 seconds, configurable
  • Max messages per cycle: Up to 5 (configurable) — prevents a backlog of 200 emails from spawning 200 concurrent runs
  • Watermark tracking via lastProcessedEmailId — the system remembers the last email it processed, so it never re-processes the same message even if it still matches the query
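The watermark logic can be sketched as follows, assuming matching messages arrive newest-first as Gmail queries return them; `selectNewMessages` and `MailboxState` are illustrative names, not JieGou's internals:

```typescript
interface EmailMessage {
  id: string;
  subject: string;
}

interface MailboxState {
  lastProcessedEmailId?: string; // the watermark
}

// Pick the messages to process this cycle, oldest first, capped at
// maxPerCycle; anything left over is picked up on the next cycle.
function selectNewMessages(
  state: MailboxState,
  matches: EmailMessage[], // newest first
  maxPerCycle = 5,
): EmailMessage[] {
  const fresh: EmailMessage[] = [];
  for (const msg of matches) {
    // Stop at the watermark: already-processed messages are skipped
    // even if they still match the query.
    if (msg.id === state.lastProcessedEmailId) break;
    fresh.push(msg);
  }
  const batch = fresh.reverse().slice(0, maxPerCycle);
  if (batch.length > 0) {
    state.lastProcessedEmailId = batch[batch.length - 1].id;
  }
  return batch;
}
```

Because the watermark only advances past messages that were actually processed, a backlog drains at `maxPerCycle` messages per poll rather than spawning one run per email all at once.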

Email-to-input mapping extracts structured data from each email:

| Email field | Maps to |
| --- | --- |
| subject | Text input field |
| sender | Text input field |
| body | Text or long-text input field |
| date | Date input field |
| headers | JSON input field |

Example: Support emails matching label:support-inbox is:unread trigger a triage workflow. The workflow classifies the issue by category and urgency, drafts a response using relevant KB articles, and routes high-priority issues to the on-call team via Slack.

Slack message

Type: slack_message | Model: Push-based via Slack Events API

Some teams run their entire operation out of Slack. Status updates, customer escalations, deployment notifications, decision requests — it all happens in channels.

The Slack message trigger uses the Slack Events API to receive messages in real time. No polling, no delay.

Filtering:

  • Channel ID — Required. The trigger watches one specific channel.
  • Keyword or regex pattern — Optional. Only messages matching the pattern fire the trigger.
  • Automatic noise filtering — The trigger ignores bot messages, message edits, and system messages. It filters for event.type === 'message' with no subtype, so you only get genuine human-authored messages.
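That filtering might look like this hypothetical `isHumanMessage` predicate (field names follow the Slack Events API; the function itself is an assumption):

```typescript
interface SlackEvent {
  type: string;
  subtype?: string; // present on edits, joins, and other system messages
  bot_id?: string;  // present on bot-authored messages
  text?: string;
  channel?: string;
}

// True only for genuine human-authored messages in the watched channel
// that match the optional keyword/regex pattern.
function isHumanMessage(
  event: SlackEvent,
  channelId: string,
  pattern?: RegExp,
): boolean {
  if (event.type !== "message") return false;    // only message events
  if (event.subtype !== undefined) return false; // drops edits and system messages
  if (event.bot_id !== undefined) return false;  // drops bot messages
  if (event.channel !== channelId) return false; // one watched channel
  if (pattern && !pattern.test(event.text ?? "")) return false;
  return true;
}
```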

Slack-to-input mapping:

| Slack field | Maps to |
| --- | --- |
| text | Message content |
| user | Slack user ID |
| channel | Channel ID |
| timestamp | Message timestamp |

Example: An #action-items channel receives messages throughout the day. Each message triggers an extraction recipe that identifies action items, assigns owners based on @-mentions, sets due dates from natural language (“by Friday”), and posts a structured summary back to a #action-tracker channel.

Browser event monitoring

Type: browser_event | Model: Push-based via browser extension

This one is different from the others. Instead of watching a service or inbox, it watches what’s happening in the browser itself — the DOM of a web page.

The browser extension monitors pages for specific conditions and fires a trigger when they occur. Five DOM condition types are supported:

| Condition | What it watches |
| --- | --- |
| element_appears | A CSS selector matches an element that wasn’t there before |
| element_disappears | A previously matched element is removed |
| text_changes | The text content of a matched element changes |
| attribute_changes | An attribute (class, data-*, etc.) of a matched element changes |
| url_changes | The page URL changes (supports wildcard patterns) |

URL pattern matching with wildcard support ensures the trigger only fires on relevant pages — https://monitoring.example.com/dashboard/* won’t fire on unrelated tabs.
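One common way to implement wildcard patterns is to escape regex metacharacters and expand `*` to `.*`; this `matchesUrlPattern` helper is an illustrative sketch, not the extension's actual code:

```typescript
// Convert a wildcard pattern to an anchored regex and test the URL.
function matchesUrlPattern(url: string, pattern: string): boolean {
  // Escape regex metacharacters except "*", which stays a wildcard.
  const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, "\\$&");
  const regex = new RegExp("^" + escaped.replace(/\*/g, ".*") + "$");
  return regex.test(url);
}

matchesUrlPattern(
  "https://monitoring.example.com/dashboard/prod",
  "https://monitoring.example.com/dashboard/*",
); // → true
matchesUrlPattern(
  "https://other.example.com/page",
  "https://monitoring.example.com/dashboard/*",
); // → false
```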

CSS selector targeting with data extraction: The extractSelectors configuration maps input fields to CSS selectors. When the trigger fires, the extension reads the current text or attribute values from those selectors and passes them as structured input.

Debounce: DOM changes can be noisy — a single user action might cause dozens of mutations. The configurable debounce (minimum 1,000ms, default 5,000ms) collapses rapid changes into a single trigger event. The rate limit of 20 browser events per minute per user provides an additional guard.
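A minimal debounce wrapper shows the idea — rapid mutation callbacks collapse into a single pending fire (an illustrative sketch, with the documented 1,000ms floor and 5,000ms default):

```typescript
// Wrap a fire function so repeated calls within the debounce window
// reset the timer; only the final quiet period triggers one fire.
function makeDebouncedTrigger(
  fire: () => void,
  debounceMs = 5_000, // default per the docs; 1,000ms minimum enforced below
): () => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return () => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(fire, Math.max(debounceMs, 1_000));
  };
}
```

In the extension this wrapper would sit between a `MutationObserver` callback and the trigger dispatch: dozens of mutations from one user action call the debounced function repeatedly, but the trigger fires once.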

Example: A monitoring dashboard shows a red alert banner when a service degrades. The trigger watches for element_appears on .alert-banner.critical. When it fires, it extracts the alert text and affected service name via extractSelectors, then triggers an investigation workflow that queries logs, checks recent deployments, and drafts an incident summary.

Architecture: pluggable event source handlers

All seven trigger types share the same execution path. The architecture uses a pluggable event source handler pattern:

  1. Every source type implements the EventSourceHandler interface
  2. A registry maps source type strings (run_completed, connector_changed, etc.) to handler instances
  3. Push-based triggers (run completion, Slack, browser) deliver events directly; poll-based triggers (connector, email) run on Cloud Scheduler intervals
  4. Both paths converge in trigger-base.ts — the same execution logic that processes webhook triggers processes event triggers

This shared path means event triggers get the same deduplication, input resolution, and execution history as webhooks. Deduplication keys prevent the same event from firing a trigger twice — critical for push-based sources where network retries could deliver duplicate events.
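The pattern above can be sketched as follows; the `EventSourceHandler` name comes from the description, but the interface shape and registry shown here are assumptions:

```typescript
interface TriggerEvent {
  sourceType: string;
  dedupeKey: string; // prevents the same event from firing twice
  payload: Record<string, unknown>;
}

interface EventSourceHandler {
  readonly sourceType: string;
  // Push handlers call this directly on delivery; poll handlers call
  // it from their scheduled cycle when a change is detected.
  toTriggerEvent(raw: unknown): TriggerEvent;
}

class HandlerRegistry {
  private handlers = new Map<string, EventSourceHandler>();

  register(handler: EventSourceHandler): void {
    this.handlers.set(handler.sourceType, handler);
  }

  resolve(sourceType: string): EventSourceHandler {
    const handler = this.handlers.get(sourceType);
    if (!handler) throw new Error(`No handler for source type: ${sourceType}`);
    return handler;
  }
}

const registry = new HandlerRegistry();
registry.register({
  sourceType: "run_completed",
  toTriggerEvent: (raw) => {
    const run = raw as { runId: string } & Record<string, unknown>;
    return {
      sourceType: "run_completed",
      dedupeKey: `run_completed:${run.runId}`, // stable key for retry dedup
      payload: run,
    };
  },
});
```

Whatever handler produced the event, the resulting `TriggerEvent` flows into the same shared execution logic, which is why deduplication, input resolution, and history behave identically across all seven trigger types.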

Observability is built in via three Prometheus metrics:

  • event_triggers_total — Counter by source type and status
  • event_trigger_duration_seconds — Histogram of trigger-to-execution latency
  • event_polling_total — Counter for poll-based triggers, tracking cycles with and without changes

Execution history

Every trigger maintains a per-trigger run history. The history view shows:

  • Color-coded source type badges — Quickly distinguish webhook-triggered runs from email-triggered or browser-triggered runs
  • Expandable rows — Click into any entry to see the raw event payload and the resolved input that was passed to the recipe or workflow
  • Links to resulting runs — Jump directly from the trigger event to the recipe or workflow run it spawned

This makes debugging straightforward. If a connector change trigger fired but the resulting workflow produced unexpected output, you can trace from the trigger event to the resolved input to the execution trace in three clicks.

Availability

Event-driven triggers (run completion, connector changed, email received, Slack message, and browser event) are available on Pro plans and above. Webhook and cron triggers are available on all plans. See all features or start your free trial.
