AI Automation Features
Everything you need to automate with AI — transparently and under your control. Recipes, workflows, DAG orchestration, prompt studio, knowledge bases, brand voice, scheduling, approvals, AI Bakeoffs, quality monitoring, batch execution, version control, real-time collaboration, and multi-provider LLM access — all in one platform with full audit trails.
Explore our new Platform pages for the complete story
Go to Platform Overview →
Recipes
Reusable AI operations with structured I/O
Recipes are the building blocks of JieGou. Each recipe is a prompt template with defined input and output schemas, so you get consistent, structured results every time.
- JSON Schema-based inputs and outputs
- Template variables with {{placeholder}} syntax
- AI-assisted recipe generation from plain English
- Attach documents for additional context
- Version tracking and feedback ratings
- Community sharing and template library
- Image, file, and audio upload with provider-aware handling
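The recipe shape described above can be sketched as follows. This is an illustrative model, not JieGou's actual API: the `recipe` dict fields and the `render` helper are hypothetical, showing only how a JSON Schema-style input definition pairs with `{{placeholder}}` substitution.

```python
import re

# Hypothetical recipe shape: a prompt template plus a JSON Schema-style
# input definition. Field names are illustrative, not the platform's API.
recipe = {
    "name": "summarize_ticket",
    "input_schema": {
        "type": "object",
        "required": ["ticket_text", "tone"],
        "properties": {
            "ticket_text": {"type": "string"},
            "tone": {"type": "string"},
        },
    },
    "template": "Summarize this support ticket in a {{tone}} tone:\n{{ticket_text}}",
}

def render(template: str, inputs: dict) -> str:
    """Substitute {{placeholder}} variables; raise if any are missing."""
    def sub(match):
        key = match.group(1)
        if key not in inputs:
            raise KeyError(f"missing input: {key}")
        return str(inputs[key])
    return re.sub(r"\{\{(\w+)\}\}", sub, template)

prompt = render(recipe["template"], {"ticket_text": "App crashes on login.", "tone": "concise"})
```

Defining inputs up front is what makes results structured and repeatable: a missing variable fails loudly at render time instead of producing a silently broken prompt.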
Workflows
Multi-step automation with ten step types
Workflows chain recipes into sophisticated automations. Sequential or DAG execution, conditional branching, iteration, parallel execution, human approval gates, LLM prompts, eval quality gates, routers, and aggregators give you full control over complex processes.
- Recipe steps with automatic input mapping
- Condition steps with 8 comparison operators
- Loop steps for batch processing
- Parallel steps for concurrent execution
- Approval steps with email notifications
- LLM, Eval, Router, and Aggregator steps for multi-agent orchestration
- DAG execution mode with convergence loops and crash-recovery checkpointing
- Natural-language workflow generation — describe what you need and JieGou builds the steps
- Visual drag-and-drop canvas with zoom, pan, minimap, and snap-to-grid
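Condition steps like those above evaluate a value against an operand. A minimal sketch of how such an operator table might work: the feature list names four of the eight operators (equals, contains, greater_than, is_empty); the other four here are illustrative guesses at a typical set, not confirmed names.

```python
# Hedged sketch of condition-step evaluation. Four operator names come from
# the feature list; the rest are plausible placeholders, not confirmed.
OPERATORS = {
    "equals": lambda a, b: a == b,
    "not_equals": lambda a, b: a != b,
    "contains": lambda a, b: b in a,
    "not_contains": lambda a, b: b not in a,
    "greater_than": lambda a, b: a > b,
    "less_than": lambda a, b: a < b,
    "is_empty": lambda a, b: not a,
    "is_not_empty": lambda a, b: bool(a),
}

def evaluate(value, operator: str, operand=None) -> bool:
    """Apply a named comparison operator; the result picks the branch."""
    return OPERATORS[operator](value, operand)
```

A workflow engine would call `evaluate` on a previous step's output and route execution down the true or false branch accordingly.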
Scheduling & Triggers
Run automations on your terms
Schedule recipes and workflows with cron expressions, fire them from webhooks, or react to events in real time. Four types of event-driven triggers let you chain runs, monitor emails and Slack, detect connector data changes, and respond to browser events.
- Cron-based scheduling with timezone support
- Webhook triggers with token authentication
- Event-driven triggers — run completion chaining, email/Slack sources, connector data changes, browser events
- Google Sheets integration for dynamic inputs
- API-based input resolution with JSONPath
- Email notifications on success or failure
- Output webhooks to push results externally
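API-based input resolution with JSONPath, mentioned above, boils down to pulling a value out of a fetched JSON payload by path. A simplified sketch (real JSONPath expressions like `$.items[0].name` are richer; `resolve_path` is a hypothetical helper using dot-separated segments):

```python
# Minimal sketch of path-based input resolution against nested JSON.
# Real JSONPath syntax is richer; this shows only the core idea.
def resolve_path(data, path: str):
    """Resolve a simplified path like 'items.0.name' against nested JSON."""
    current = data
    for part in path.split("."):
        if isinstance(current, list):
            current = current[int(part)]
        else:
            current = current[part]
    return current

# Toy payload standing in for an API response fetched at schedule time.
payload = {"items": [{"name": "Q3 report", "rows": 42}]}
value = resolve_path(payload, "items.0.name")
```

At schedule time, each resolved value would be mapped into the corresponding recipe input before the run starts.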
Conversational AI
Inline AI chat — available on every page
No navigation needed. An inline chat surface is available on every page of the console. Have multi-turn conversations where the AI remembers context, accesses your connected tools, and suggests relevant recipes.
- Inline chat surface on every page — no navigation required
- Department-aware system prompts
- Recipe suggestion chips in conversation
- MCP tool access during chat
- Persistent conversation history
- Document attachment for context
- Model selection per conversation
Department Packs
Pre-built automation for every team
Get started instantly with curated packs for Sales, Marketing, Support, HR, Finance, Operations, Legal, Engineering, and Executive teams.
- 20 department-specific packs
- 7-10 recipes per pack
- 2-5 workflows per pack
- Suggested schedules included
- Integration recommendations
- Step-by-step setup guides
Analytics & Observability
Measure what matters
Track every execution across recipes, workflows, and schedules. Monitor costs, success rates, and token usage with department-filtered dashboards.
- Execution metrics and success rates
- Token usage by provider and model
- Cost estimation and overage tracking
- Department-filtered analytics views
- Template health scoring
- Time-saved ROI estimation
Knowledge & Documents
Give your AI real company knowledge
Upload files, crawl entire websites, or connect 12 external knowledge sources. Built-in Firestore vector search with Redis caching delivers sub-second retrieval. JieGou automatically chunks, embeds, and retrieves the most relevant context for every AI operation. Agent Workspaces add governed persistent memory across workflow executions.
- Upload files and import URLs
- Website crawl pipeline — sitemap discovery, smart filtering, JS rendering, incremental refresh
- Connect 12 external sources — Coveo, Elasticsearch, Algolia, Glean, Pinecone, Vectara, Guru, Confluence, Notion, Zendesk, Google Drive, OneDrive
- Built-in Firestore vector search — no external vector DB required
- Hybrid retrieval with Redis caching for sub-second warm queries
- Automatic chunking and embedding
- RAG retrieval for recipes and workflows
- Knowledge base grouping with auto-context
- Encrypted credentials, health checks, and audit trails for every knowledge source
- Scoped injection by recipe, workflow, or department
- Freshness tracking and re-indexing
- Agent Workspaces: governed cross-workflow persistent memory with provenance tracking
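The chunk-embed-retrieve loop behind the RAG features above can be sketched in miniature. The embeddings here are toy three-dimensional vectors standing in for real model output, and the in-memory `index` dict stands in for Firestore vector search; only the ranking idea carries over.

```python
import math

# Toy sketch of RAG retrieval: chunk text, embed chunks, rank by cosine
# similarity to the query embedding. Vectors here are hand-made stand-ins.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def chunk(text: str, size: int = 40) -> list[str]:
    """Naive fixed-size chunking; production systems split on structure."""
    return [text[i:i + size] for i in range(0, len(text), size)]

# index: chunk text -> embedding (toy vectors standing in for model output)
index = {
    "refund policy: 30 days": [0.9, 0.1, 0.0],
    "shipping takes 3-5 days": [0.1, 0.9, 0.0],
}

def retrieve(query_vec, k: int = 1) -> list[str]:
    ranked = sorted(index.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

top = retrieve([0.85, 0.2, 0.0])  # a query "near" the refund chunk
```

The top-k chunks returned this way are what gets injected as context into the AI operation.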
Multi-Provider LLM
Any model, any provider — your choice
Run Claude, GPT-5, Gemini, or open-source models like Llama 4 and DeepSeek from a single platform. Bring your own API keys, connect local inference servers, or use platform-provided keys. Pick a different model for every recipe, workflow step, and conversation.
- Anthropic (Claude Sonnet/Haiku/Opus), OpenAI (GPT-5.x, o3/o4-mini), and Google (Gemini 3/2.5) with BYOK
- OpenAI-compatible endpoints — connect any custom base URL for fine-tuned or self-hosted models
- Certified open-source registry: Llama 4 Maverick, DeepSeek V3.2, Qwen 3 235B, Mistral 3 Large on vLLM
- Auto-discovery of local Ollama and vLLM servers — models appear automatically
- Per-step model selection in workflows — use the best model for each task
- Model recommendation engine scoring success rate, cost efficiency, and speed from your execution history
- AES-256-GCM envelope encryption for API keys with per-account HKDF key derivation
- Per-provider circuit breakers, priority-based concurrency limits, and unified cost tracking
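The per-account HKDF key derivation named above follows RFC 5869 (extract-then-expand with HMAC-SHA256). This sketch shows only the derivation step with toy inputs; the derived key would then wrap secrets with AES-256-GCM, which is elided here because it needs a crypto library such as `cryptography`. Nothing here is JieGou's actual implementation.

```python
import hashlib
import hmac

# RFC 5869 HKDF with HMAC-SHA256: extract a PRK from the master key,
# then expand it into per-account key material. Inputs are toy values.
def hkdf_sha256(master_key: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    prk = hmac.new(salt, master_key, hashlib.sha256).digest()  # extract
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                                    # expand
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

master = b"platform-master-key-material"
account_key = hkdf_sha256(master, salt=b"per-tenant-salt", info=b"account:acme-corp")
other_key = hkdf_sha256(master, salt=b"per-tenant-salt", info=b"account:globex")
```

Binding the account identifier into the `info` parameter is what makes each tenant's key distinct even under a shared master key.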
Brand Voice & Governance
Consistent AI outputs across your organization
Define your brand voice once and apply it everywhere. Set tone, formality, glossary terms, and prompt rules that every AI output respects — with full audit logging and approval policies.
- Tone and formality settings
- Glossary with preferred and prohibited terms
- Reusable prompt fragments
- Audit log tracking 30+ action types
- Multi-approver policies with escalation
- RBAC with department scoping
Real-Time Collaboration
Work together on AI workflows in real time
JieGou goes beyond sharing outputs. Your team can co-browse the console, chat in the context of specific runs, and screen-share to review AI results together.
- Presence indicators — see who's online and where they are
- Contextual chat on runs, recipes, and workflows
- Platform-wide team chat with search
- Screen sharing with one-click invite
- Follow mode — watch a teammate's cursor in real time
- Session recording and replay for async review
AI Evaluation & Bakeoffs
Don't just run AI — measure it
Compare recipes, models, and full workflows side by side. Use LLM-as-judge scoring, multi-judge consensus, and live A/B routing to find the best configuration for every use case.
- Recipe vs. recipe and model vs. model comparison
- Multi-judge evaluation with inter-rater correlation
- Statistical confidence intervals (95% CI)
- Live A/B test routing with auto-stop conditions
- Synthetic input generation from schemas
- Workflow-level head-to-head comparison
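The 95% confidence intervals mentioned above can be illustrated with the simplest version: a normal-approximation interval around an observed success rate (z = 1.96). Real bakeoff tooling may well use Wilson intervals or bootstrapping instead; this is just the back-of-envelope form.

```python
import math

# Normal-approximation 95% CI for a success rate: p ± z * sqrt(p(1-p)/n),
# clamped to [0, 1]. A coarse sketch, not the platform's exact method.
def success_rate_ci(successes: int, trials: int, z: float = 1.96):
    p = successes / trials
    half_width = z * math.sqrt(p * (1 - p) / trials)
    return (max(0.0, p - half_width), min(1.0, p + half_width))

# 87 successes out of 100 trials -> roughly (0.80, 0.94)
lo, hi = success_rate_ci(87, 100)
```

When two models' intervals do not overlap, the bakeoff verdict is much stronger than a bare win rate.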
Workflow Orchestration
Orchestrate multi-stage automations with DAG workflows
Build directed acyclic graph (DAG) workflows where steps run concurrently based on dependencies. Use SubWorkflowStep to invoke other workflows as steps, giving you composable multi-stage orchestration from a single canvas.
- DAG execution mode with visual canvas editor
- SubWorkflowStep to compose workflows within workflows
- Automatic dependency resolution and concurrent wave execution
- Convergence loops for iterative refinement with quality gates
- Pattern templates: Critic/Refiner, Specialist Router, Debate/Consensus, Plan/Execute/Verify
- Crash-recovery checkpointing with resume from any point
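The "dependency resolution and concurrent wave execution" above is classically done with Kahn's algorithm grouped by depth: every step whose dependencies are all satisfied joins the same wave. A sketch with illustrative step names (not a real JieGou workflow):

```python
# Resolve a DAG of step dependencies into concurrent execution waves.
# Each wave contains every step whose dependencies are already complete.
def execution_waves(deps: dict[str, set[str]]) -> list[list[str]]:
    remaining = {step: set(d) for step, d in deps.items()}
    waves = []
    while remaining:
        ready = sorted(s for s, d in remaining.items() if not d)
        if not ready:
            raise ValueError("cycle detected: not a DAG")
        waves.append(ready)
        for step in ready:
            del remaining[step]
        for d in remaining.values():
            d.difference_update(ready)
    return waves

# Illustrative DAG: summarize and translate both depend on fetch,
# publish depends on both, so the middle wave runs concurrently.
dag = {
    "fetch": set(),
    "summarize": {"fetch"},
    "translate": {"fetch"},
    "publish": {"summarize", "translate"},
}
waves = execution_waves(dag)
```

Checkpointing falls out naturally from this shape: persisting completed waves lets a crashed run resume at the first unfinished wave.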
Prompt Engineering Studio
Craft, test, and optimize prompts systematically
Go beyond trial-and-error prompt writing. The Prompt Engineering Studio gives you version tracking, live token budget visualization, a variable inspector, few-shot example curation, and an AI-powered optimizer that suggests improvements.
- Full version history with diff comparison
- Live token count and budget visualization per model
- Variable inspector — see all placeholders and their values
- Few-shot example curation with drag-and-drop ordering
- AI-powered prompt optimizer with improvement suggestions
- Side-by-side output preview across prompt versions
Quality Guard
Continuous quality monitoring for AI outputs
Attach a Quality Guard to any recipe and get automatic LLM-as-judge evaluation on every production run. Set quality thresholds, track trends over time, and get alerts when output quality drifts below your standards.
- Automatic LLM-as-judge scoring on every run
- Configurable quality criteria and thresholds
- Quality trend charts with drift detection
- Alerts when scores drop below your baseline
- Per-recipe quality dashboard on the detail page
- Pairs with bakeoffs for a full evaluation workflow
Batch Execution
Process entire datasets in one click
Upload a CSV or paste a data table, then run any recipe across every row. Track progress in real time, filter results, and export outputs as CSV or JSON for downstream use.
- CSV upload or paste-in data table
- Run any recipe across all rows in parallel
- Real-time progress tracking per row
- Filter and search results by status or content
- Export results as CSV or JSON
- Configurable concurrency and retry settings
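The batch loop above is conceptually simple: parse CSV rows, fan a recipe out over them with bounded concurrency, collect per-row results. In this sketch `run_recipe` is a hypothetical stand-in for the platform call, and the uppercase transform is a placeholder for real recipe output.

```python
import csv
import io
from concurrent.futures import ThreadPoolExecutor

# Stand-in for the platform's recipe call; here it just transforms the row.
def run_recipe(row: dict) -> dict:
    return {"status": "succeeded", "output": row["name"].upper()}

def run_batch(csv_text: str, max_workers: int = 4) -> list[dict]:
    """Run the recipe over every CSV row with bounded concurrency."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # pool.map preserves row order, so results line up with inputs.
        return list(pool.map(run_recipe, rows))

results = run_batch("name\nalice\nbob\n")
```

The `max_workers` knob is the "configurable concurrency" from the feature list; retries would wrap `run_recipe` per row.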
Workflow Version Control
Ship changes safely with version control and canary rollouts
Every workflow edit creates an immutable version. Compare any two versions with a visual diff engine. When you're ready to deploy, use canary rollouts to send a percentage of traffic to the new version before promoting it to production.
- Immutable version snapshots on every save
- Visual diff engine for step-by-step comparison
- Canary rollouts — route a percentage of traffic to the new version
- One-click promotion from canary to production
- Rollback to any previous version instantly
- Version history with author and timestamp metadata
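Percentage-based canary routing is commonly implemented by hashing a stable run key into a bucket, which keeps routing sticky per key instead of flipping randomly between versions. A sketch of that idea (not necessarily JieGou's exact mechanism):

```python
import hashlib

# Hash a stable run key into [0, 100) and route runs under the canary
# percentage to the new version. Hashing makes assignment deterministic.
def route_version(run_key: str, canary_percent: int) -> str:
    bucket = int(hashlib.sha256(run_key.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_percent else "production"

assignments = {key: route_version(key, 10) for key in ("run-1", "run-2", "run-3")}
```

Promotion then amounts to raising `canary_percent` to 100; rollback sets it to 0.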
Instagram DM Automation
Automate Instagram DMs with AI
Connect your Instagram Business account to JieGou and let AI handle incoming DMs. Triage messages by priority, auto-respond to FAQs with confidence scoring, draft contextual replies, and escalate complex conversations — all with full governance controls.
- Instagram Graph API webhook integration with HMAC-SHA256 signature verification via the Meta platform
- FAQ knowledge base with automatic chunking, embedding, and RAG retrieval
- Confidence-scored auto-replies — high-confidence answers sent instantly, low-confidence escalated
- Story reply handling — process replies to your Stories as engagement or support triggers
- Media attachment support — images, videos, audio, and stickers in both directions
- Pre-built Instagram Support Pack with triage, FAQ bot, response drafter, and escalation recipes
- Multi-language support — AI adapts response language to match the incoming message
- Full audit trails, approval gates, and governance controls
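The HMAC-SHA256 verification above is Meta's standard `X-Hub-Signature-256` webhook check (shared by the Instagram Graph API and Messenger Platform): a hex HMAC-SHA256 of the raw request body under the app secret, compared in constant time. The secret and body below are toy stand-ins.

```python
import hashlib
import hmac

# Verify Meta's X-Hub-Signature-256 header: "sha256=" + hex HMAC of the
# raw body under the app secret, compared with a constant-time check.
def verify_meta_signature(app_secret: bytes, raw_body: bytes, header: str) -> bool:
    expected = "sha256=" + hmac.new(app_secret, raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, header)

secret = b"app-secret"
body = b'{"object":"instagram","entry":[]}'
good_header = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
```

Verifying against the raw bytes (before any JSON parsing) matters: re-serialized JSON will not reproduce the original signature.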
Persistent Memory
AI agents that remember — 5 layers of persistent memory
Give your agents institutional memory that compounds over time. A 5-layer hierarchy mirrors how organizations store and retrieve knowledge — from individual entity facts to cross-workflow insights.
- Entity Memory — persistent facts about customers, products, and projects with LLM compaction
- Workflow Memory — per-workflow accumulated knowledge from execution history
- Department Memory — CLAUDE.md-equivalent institutional knowledge per department
- Agent Memory — per-agent persistent state across conversations and tasks
- Cross-Workflow Memory — entity memory shared across all workflows in a department
- LLM compaction prevents unbounded growth while preserving signal
- Full governance integration — audit trails, RBAC, and department scoping
Department Memory
Institutional knowledge for every AI agent
Department Memory gives every agent institutional context about its department. Auto-populated from installed recipes, templates, and active workflows. Always-on — not query-triggered like a knowledge base.
- Auto-populates from installed recipes, templates, and active workflows
- Always-on context injection into every agent interaction and workflow execution
- Per-department scoping — Marketing, Support, Finance, Sales, Engineering each get their own memory
- Grows as new workflows are added and agents learn from execution
- Not a knowledge base — structured context (rules, preferences, procedures), not document retrieval
- The CLAUDE.md equivalent for enterprise departments
Facebook Messenger Automation
Automate Facebook Messenger with AI
Connect your Facebook Page to JieGou and let AI handle Messenger conversations. Triage messages, auto-respond to FAQs, generate Quick Reply buttons for guided interactions, and manage Persistent Menus — with full governance and audit trails.
- Messenger Platform API webhook integration with HMAC-SHA256 signature verification via the Meta platform
- FAQ knowledge base with automatic chunking, embedding, and RAG retrieval
- Confidence-scored auto-replies — high-confidence answers sent instantly, low-confidence escalated
- Quick Reply buttons — AI generates guided conversation options instead of open-ended text
- Sender Actions — typing indicators and read receipts for natural conversation pacing
- Persistent Menu management for always-available navigation options
- Pre-built Messenger Support Pack with triage, FAQ bot, response drafter, and escalation recipes
- Postback payload routing — handle menu selections and button clicks as workflow triggers
- Full audit trails, approval gates, and governance controls
LINE Integration
Automate LINE customer support with AI
Connect your LINE Official Account to JieGou and let AI handle incoming messages. Upload FAQ knowledge bases, auto-match customer questions with confidence scoring, and respond with rich Flex Messages — while routing low-confidence queries to human agents.
- LINE Messaging API webhook integration with HMAC-SHA256 signature verification
- FAQ knowledge base with automatic chunking, embedding, and RAG retrieval
- Confidence-scored auto-replies — high-confidence answers sent instantly, low-confidence escalated
- Rich Flex Message responses with cards, buttons, images, and carousels
- Free reply-token responses within the 60-second window, with push-message fallback
- Auto-capture: successful responses feed back into the FAQ knowledge base
- Pre-built LINE Support Starter Pack with triage, FAQ bot, response drafter, and escalation recipes
- Multi-language support — Japanese, Thai, Traditional Chinese, and English
- Full audit trails, approval gates, and governance controls
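The reply-versus-push decision above follows from how LINE reply tokens work: replying via a token is free but the token expires (the feature list cites a 60-second window), so late responses must fall back to a push message. A sketch of just that decision; the window constant mirrors the doc's claim, and the actual LINE Messaging API send calls are elided.

```python
import time

# Decide between a free reply-token response and a push-message fallback.
# The 60-second window comes from the feature list above.
REPLY_WINDOW_SECONDS = 60

def choose_send_mode(event_received_at: float, now: float) -> str:
    """Return 'reply' while the reply token is still valid, else 'push'."""
    return "reply" if now - event_received_at < REPLY_WINDOW_SECONDS else "push"

mode_fast = choose_send_mode(event_received_at=1000.0, now=1030.0)  # 30s later
mode_slow = choose_send_mode(event_received_at=1000.0, now=1090.0)  # 90s later
```

In practice `now` would be `time.time()` at send time; it is a parameter here so the decision is testable.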
AI Models
Choose the right model for every task
Access 9 LLM providers with full cost transparency. Switch models per recipe or workflow step. Use AI Bakeoffs to prove which model works best for each workflow. Your API keys, your data — JieGou never stores or trains on your content.
Anthropic
- Claude Opus 4.6
- Claude Sonnet 4.6
- Claude Haiku 4.5
OpenAI
- GPT-5.2 / GPT-5.1
- o3 / o4-mini
- GPT-5-mini / nano
Google
- Gemini 3.1 Pro / Flash
- Gemini 2.5 Pro / Flash
- Gemini 2.5 Flash Lite
Mistral
Groq
xAI (Grok)
AWS Bedrock
Azure OpenAI
OpenAI-compatible
Workflow Engine
Ten powerful step types
Compose any automation logic with recipe, condition, loop, parallel, approval, LLM, eval, router, aggregator, and coding agent steps.
Recipe
Execute an AI recipe with automatic input mapping from previous steps. Includes retry with exponential backoff.
Condition
Branch execution based on value comparisons. 8 operators including equals, contains, greater_than, and is_empty.
Loop
Iterate over a collection, running nested steps for each item. Automatically aggregates iteration outputs.
Parallel
Run multiple branches concurrently. Each branch gets an independent output snapshot for isolation.
Approval
Pause for human review with full transparency. Approvers receive email notifications, see the complete execution context, and can provide input before the workflow continues.
LLM
Issue a direct LLM prompt without a recipe wrapper. Ideal for lightweight transformations and quick decisions.
Eval
Score outputs against quality criteria. Trigger convergence loops for iterative refinement until quality thresholds are met.
Router
Classify inputs with an LLM and branch to specialist routes. Each route executes a different downstream path.
Aggregator
Combine outputs from parallel branches via merge, best-of, or synthesize strategies.
Coding Agent
Execute a sandboxed coding agent that writes, tests, and iterates on code within Docker containers with V8 isolate safety. Ideal for data transformations, report generation, and programmatic tasks.
Deep dives
Explore our newest capabilities
Click through for full details on these flagship features.
Real-Time Collaboration
Presence awareness, contextual chat, screen sharing, follow mode, and session recording.
Learn more →
AI Evaluation & Bakeoffs
Recipe and model comparison, multi-judge scoring, A/B test routing, and synthetic inputs.
Learn more →
Browser Automation
AI chat, command palette, agentic browsing, flow recording, and 60+ MCP browser tools for Gmail, Slack, Jira, and more.
Learn more →
Workflow Orchestration
DAG execution, convergence loops, sub-workflow steps, and pattern templates for multi-agent orchestration.
Learn more →
Prompt Engineering Studio
Version tracking, token budgets, variable inspector, few-shot curation, and AI-powered optimizer.
Learn more →