AI Automation Glossary

Plain-language definitions for the key concepts behind AI-powered workflow automation.

AI Recipes

An AI recipe is a reusable, single-operation AI task with a structured prompt, typed input schema, and typed output schema. Recipes are the fundamental building blocks of AI automation in JieGou — each one performs one well-defined operation, like scoring a lead, drafting an email, or extracting invoice data.
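The shape of a recipe — typed input in, structured prompt, typed output out — can be sketched in a few lines of Python. All names here (`LeadInput`, `run_recipe`, the stand-in `call_llm`) are illustrative, not JieGou's actual API:

```python
from dataclasses import dataclass

@dataclass
class LeadInput:          # typed input schema (illustrative)
    company: str
    employees: int

@dataclass
class LeadScore:          # typed output schema (illustrative)
    score: int
    reason: str

def run_recipe(inp: LeadInput, call_llm) -> LeadScore:
    """Single-operation recipe: structured prompt in, typed output out."""
    prompt = f"Score lead {inp.company} ({inp.employees} employees) from 1-10."
    score, reason = call_llm(prompt)   # call_llm stands in for the provider call
    return LeadScore(score=score, reason=reason)

result = run_recipe(LeadInput("Acme Corp", 250), lambda p: (8, "mid-market fit"))
```

The point of the typed schemas is that the caller never sees raw LLM text — only validated fields.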

Read full definition →

AI Workflows

An AI workflow is a multi-step automation that chains AI recipes together with control flow logic — conditions, loops, parallel branches, human approval gates, LLM prompts, eval quality gates, routers, and aggregators. Workflows support sequential or DAG execution and transform isolated AI tasks into end-to-end business processes that run with minimal human intervention.

Read full definition →

BYOK (Bring Your Own Key)

BYOK (Bring Your Own Key) is a model where you connect your own LLM provider API keys — from Anthropic, OpenAI, or Google — to the automation platform. Instead of paying the platform for AI usage, you pay the providers directly. Your prompts and responses flow between you and the provider without the platform seeing your data.

Read full definition →

MCP (Model Context Protocol)

MCP (Model Context Protocol) is an open protocol that standardizes how AI models interact with external tools and services. Instead of building custom integrations for every tool, MCP provides a universal interface: tools expose their capabilities, and AI models call them through structured requests and responses. JieGou uses MCP for browser automation and external tool integration.
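MCP messages are JSON-RPC 2.0; a tool invocation uses the `tools/call` method with the tool's name and arguments. A minimal sketch of building such a request (the tool name and argument here are made up for illustration):

```python
import json

def mcp_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Build an MCP tools/call request (JSON-RPC 2.0 envelope)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical browser-automation tool exposed by an MCP server.
req = json.loads(mcp_tool_call("browser_navigate", {"url": "https://example.com"}))
```

The server replies with a structured result for the same `id`, so the model never needs tool-specific wire formats.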

Read full definition →

Department Packs

A department pack is a curated bundle of AI recipes, multi-step workflows, suggested schedules, and governance settings designed for a specific business function. JieGou offers 20 department packs covering Sales, Marketing, Support, HR, Finance, Legal, IT & Security, Operations, and more. Installing a pack gives a team production-ready AI automation in minutes.

Read full definition →

AI Governance

AI governance encompasses the policies, technical controls, organizational processes, and oversight mechanisms that ensure AI systems operate safely, transparently, and within regulatory boundaries. In the context of AI automation platforms, governance includes access control (who can build and run AI), tool approval gates (which external services AI can access), audit logging (what AI did and when), cost controls (budget limits per department), and compliance alignment (mapping AI operations to frameworks like EU AI Act, NIST AI RMF, and ISO 42001).

Read full definition →

Prompt Template

A prompt template is a reusable set of instructions for a large language model with placeholder variables that are filled at runtime with specific data. Unlike one-off prompts typed into a chat window, templates are versioned, tested, and optimized over time. In JieGou, every AI recipe is built on a prompt template with typed input schemas, output schemas, and optional knowledge base context.
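Runtime variable substitution can be sketched with Python's standard-library `string.Template`; the placeholder names and prompt text are illustrative, not a JieGou template:

```python
from string import Template

# A reusable template with placeholder variables.
lead_scoring_prompt = Template(
    "Score this lead from 1-10.\n"
    "Company: $company\n"
    "Industry: $industry\n"
    "Employee count: $employees"
)

# At runtime, the typed input fills the placeholders.
prompt = lead_scoring_prompt.substitute(
    company="Acme Corp", industry="Manufacturing", employees=250
)
```

Because the template is a named, versionable object rather than ad-hoc chat text, it can be tested and improved without touching the code that runs it.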

Read full definition →

Large Language Model (LLM)

A large language model (LLM) is a neural network trained on massive text corpora that can understand instructions, reason about data, and generate human-quality text. LLMs power the AI capabilities in platforms like JieGou — when a recipe runs, it sends a structured prompt to an LLM (such as Claude, GPT, or Gemini) and receives a text response that is parsed into structured output fields.

Read full definition →

AI Agent

An AI agent is an AI system that goes beyond generating text to autonomously reasoning about tasks, making decisions, using external tools, and taking actions to achieve goals. Unlike a chatbot that responds to individual prompts, an agent can break down complex goals into sub-tasks, call APIs, read and write data, and iterate until the objective is complete.

Read full definition →

RAG (Retrieval-Augmented Generation)

Retrieval-Augmented Generation (RAG) is a technique that enhances LLM outputs by retrieving relevant documents from a knowledge base and including them as context in the prompt. Instead of relying solely on the model's training data, RAG grounds responses in your organization's specific documents, policies, and data — reducing hallucination and increasing accuracy.
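The retrieve-then-prompt pattern can be sketched with a toy word-overlap retriever — production RAG systems use embeddings and a vector index instead, so treat this purely as an illustration of the data flow:

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(q & set(d.lower().split())))[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Ground the prompt in retrieved passages."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our refund policy allows returns within 30 days.",
    "Office dogs are welcome on Fridays.",
]
prompt = build_prompt("what is the refund policy", docs)
```

The LLM then answers from the supplied context rather than from memory, which is what reduces hallucination.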

Read full definition →

RBAC (Role-Based Access Control)

Role-Based Access Control (RBAC) is a security model that assigns permissions to predefined roles (like Owner, Admin, Manager, Editor, Viewer) rather than individual users. Each role has a specific set of capabilities — who can create recipes, run workflows, approve actions, manage API keys, or view audit logs. RBAC ensures the principle of least privilege: users get exactly the access they need, nothing more.
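The role-to-permission mapping is conceptually just a lookup table. A minimal sketch, with permission names invented for illustration (not JieGou's actual permission list):

```python
# Illustrative permission sets per role.
ROLE_PERMISSIONS = {
    "Viewer":  {"view_runs"},
    "Editor":  {"view_runs", "create_recipes"},
    "Manager": {"view_runs", "create_recipes", "run_workflows", "approve_actions"},
    "Admin":   {"view_runs", "create_recipes", "run_workflows",
                "approve_actions", "manage_api_keys", "view_audit_logs"},
}
# Owner gets everything Admin has, plus a hypothetical extra capability.
ROLE_PERMISSIONS["Owner"] = ROLE_PERMISSIONS["Admin"] | {"transfer_workspace"}

def can(role: str, permission: str) -> bool:
    """Least privilege: unknown roles get no access at all."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

Checks then read naturally at call sites: `can(user.role, "manage_api_keys")` before any key operation.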

Read full definition →

AI Bakeoff

An AI Bakeoff is a structured evaluation that compares multiple AI configurations — different LLM models, prompt variations, or workflow designs — on identical inputs using LLM-as-judge automated scoring. Bakeoffs produce ranked results with statistical confidence intervals, helping teams make data-driven decisions about which model or prompt to use in production.
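The scoring side of a bakeoff reduces to summary statistics over per-run judge scores. A simplified sketch using a normal-approximation 95% interval — a stand-in for whatever statistical method a real bakeoff uses:

```python
from statistics import mean, stdev
from math import sqrt

def score_summary(scores: list[float]) -> tuple[float, float]:
    """Mean judge score and an approximate 95% confidence half-width."""
    return mean(scores), 1.96 * stdev(scores) / sqrt(len(scores))

def rank_configs(results: dict[str, list[float]]) -> list[str]:
    """Order configurations by mean judge score, best first."""
    return sorted(results, key=lambda c: mean(results[c]), reverse=True)

# Hypothetical judge scores for two configurations on identical inputs.
results = {
    "model-a": [0.90, 0.85, 0.88, 0.92],
    "model-b": [0.70, 0.65, 0.72, 0.68],
}
```

If the two configurations' intervals overlap heavily, the honest conclusion is "no clear winner yet — collect more runs."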

Read full definition →

Webhook

A webhook is an HTTP endpoint that receives data from external systems when specific events occur — a new support ticket, a form submission, a CRM update, or a payment event. In JieGou, webhooks serve as triggers that automatically start AI workflows in response to real-world events, enabling real-time automation without manual intervention or polling.

Read full definition →

Approval Gate

An approval gate is a step in an AI workflow that pauses execution and requires one or more designated humans to review the current output and approve, reject, or modify it before the workflow continues. Approval gates insert human judgment at critical decision points — before sending a customer email, publishing content, or executing a financial transaction.
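The pause-review-resume behavior is a small state machine. A sketch with illustrative names, covering the approve/reject/modify outcomes described above:

```python
from enum import Enum

class GateState(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"

class ApprovalGate:
    """Execution pauses at PENDING until a reviewer decides."""
    def __init__(self, output):
        self.output = output
        self.state = GateState.PENDING

    def review(self, approved: bool, modified_output=None):
        if modified_output is not None:
            self.output = modified_output   # reviewer edited the output
        self.state = GateState.APPROVED if approved else GateState.REJECTED

    def can_continue(self) -> bool:
        return self.state is GateState.APPROVED
```

The workflow engine simply refuses to run downstream steps while `can_continue()` is false.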

Read full definition →

DAG Execution

DAG (directed acyclic graph) execution is a workflow execution mode where steps run concurrently based on their dependency relationships rather than strictly in sequence. Steps that don't depend on each other run in parallel, while dependent steps wait for their inputs. This dramatically reduces end-to-end execution time for complex workflows with independent branches.
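Python's standard-library `graphlib` shows the scheduling idea directly: each batch of "ready" steps can run in parallel. The step names below are an invented example workflow:

```python
from graphlib import TopologicalSorter

# Step -> set of steps it depends on.
deps = {
    "fetch_crm": set(),
    "fetch_tickets": set(),              # independent of fetch_crm
    "score_lead": {"fetch_crm"},
    "summarize": {"fetch_crm", "fetch_tickets"},
}

ts = TopologicalSorter(deps)
ts.prepare()
batches = []
while ts.is_active():
    ready = list(ts.get_ready())         # these steps have all inputs: run in parallel
    batches.append(sorted(ready))
    ts.done(*ready)
```

Here the two fetches form the first parallel batch; scoring and summarizing only start once their inputs exist.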

Read full definition →

Structured Output

Structured output is the practice of having an LLM return data in a predefined schema — typed fields with specific names, types, and validation rules — rather than free-form natural language. This makes AI output machine-readable and suitable for downstream automation: feeding into databases, triggering conditional logic, populating dashboards, or passing to the next workflow step.
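Schema validation of an LLM response can be sketched with a plain type map — real platforms typically use JSON Schema or similar, so the field names and checks here are illustrative only:

```python
import json

# Illustrative schema: field name -> expected Python type.
SCHEMA = {"sentiment": str, "confidence": float, "tags": list}

def parse_structured_output(raw: str) -> dict:
    """Parse an LLM's JSON response and validate it against the schema."""
    data = json.loads(raw)
    for field, expected in SCHEMA.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], expected):
            raise TypeError(f"{field} should be {expected.__name__}")
    return data

result = parse_structured_output(
    '{"sentiment": "positive", "confidence": 0.92, "tags": ["pricing"]}'
)
```

A response that fails validation is rejected before it can corrupt downstream steps — which is the whole point of typed output.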

Read full definition →

Token

A token is the fundamental unit of text that large language models process. In English, one token is roughly 3/4 of a word (100 tokens ≈ 75 words). LLM providers charge based on token consumption: input tokens (your prompt and context) plus output tokens (the model's response). Understanding tokens is essential for managing AI costs and staying within model context windows.
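The word-based rule of thumb and the input-plus-output pricing model translate into two lines of arithmetic. The per-million-token prices below are placeholder numbers, not any provider's actual rates:

```python
def estimate_tokens(text: str) -> int:
    """Rough heuristic: 1 token is about 3/4 of an English word."""
    return round(len(text.split()) / 0.75)

def estimate_cost(input_tokens: int, output_tokens: int,
                  in_price: float, out_price: float) -> float:
    """Prices are per 1M tokens; input and output are billed separately."""
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

# 75 words is roughly 100 tokens under the heuristic.
tokens = estimate_tokens("word " * 75)
```

Real tokenizers vary by model, so treat the heuristic as a budgeting aid, not an exact count.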

Read full definition →

AI Automation

AI automation is the use of large language models and AI agents to execute business tasks and multi-step workflows that traditionally required human judgment — analyzing data, generating content, making decisions, and taking actions. Unlike traditional rule-based automation (if-then logic), AI automation handles ambiguity, understands context, and produces human-quality outputs.

Read full definition →

EU AI Act

The EU AI Act is the European Union's comprehensive regulatory framework for artificial intelligence, adopted in 2024. It classifies AI systems into risk tiers (unacceptable, high, limited, minimal) and imposes requirements including transparency, human oversight, technical documentation, and conformity assessments. For organizations deploying AI automation, the EU AI Act creates compliance obligations around how AI systems are designed, deployed, and monitored.

Read full definition →

NIST AI RMF

The NIST AI Risk Management Framework (AI RMF) is a voluntary framework published by the U.S. National Institute of Standards and Technology for identifying, assessing, and mitigating risks in AI systems. It is organized around four core functions: Govern (policies and oversight), Map (context and risk identification), Measure (risk assessment and tracking), and Manage (risk treatment and response).

Read full definition →

Data Residency

Data residency is the requirement that data be stored and processed within specific geographic boundaries. For AI automation, data residency controls determine where prompts, inputs, outputs, and audit logs are stored — critical for organizations subject to GDPR (EU), HIPAA (US healthcare), PDPA (Asia-Pacific), or industry-specific regulations that mandate data remain within certain jurisdictions.

Read full definition →

A2A Protocol

The Agent-to-Agent (A2A) protocol is an open standard for inter-agent communication that allows AI agents built on different platforms to discover each other, negotiate capabilities, delegate tasks, and share results. A2A enables a multi-vendor agent ecosystem where a JieGou workflow can delegate a sub-task to an external agent and receive structured results back.

Read full definition →

Convergence Loop

A convergence loop is a quality control mechanism in AI workflows that links an eval step (quality gate) back to an upstream step. When the eval scores output below a configurable quality threshold, the workflow automatically re-executes the upstream steps with feedback from the eval, iterating until the output meets the quality bar or a maximum iteration count is reached.
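The generate-evaluate-retry cycle can be sketched as a small loop; `generate` and `evaluate` below are toy stand-ins for an upstream step and an eval quality gate:

```python
def convergence_loop(generate, evaluate, threshold=0.8, max_iters=3):
    """Re-run the upstream step with eval feedback until quality passes."""
    feedback = None
    for _ in range(max_iters):
        output = generate(feedback)          # upstream step, fed prior feedback
        score, feedback = evaluate(output)   # eval gate: score + critique
        if score >= threshold:
            break
    return output, score

# Toy stand-ins: the second attempt clears the quality gate.
attempts = iter([(0.5, "add specifics"), (0.9, "looks good")])
output, score = convergence_loop(
    generate=lambda fb: f"draft (revised per: {fb})",
    evaluate=lambda out: next(attempts),
)
```

The `max_iters` cap matters: without it, a stubbornly low-scoring step would loop (and bill tokens) forever.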

Read full definition →

Multi-Channel AI

Multi-channel AI is the deployment of AI-powered automation across multiple communication channels from a single platform. Instead of building separate AI integrations for each messaging platform, multi-channel AI lets you create one workflow that serves customers on LINE, WhatsApp, Instagram, Messenger, email, web chat, and more — with unified inbox management and consistent AI quality across all channels.

Read full definition →

AI Readiness

AI readiness is a measure of an organization's preparedness to adopt, deploy, and sustain AI automation. It encompasses technical infrastructure (data quality, API integrations, security posture), organizational factors (leadership buy-in, AI literacy, change management), and governance maturity (policies, compliance frameworks, audit capabilities). Organizations with high AI readiness can deploy AI automation faster and with lower risk.

Read full definition →

AES-256-GCM Encryption

AES-256-GCM (Advanced Encryption Standard with 256-bit keys in Galois/Counter Mode) is an authenticated encryption algorithm that provides both data confidentiality and integrity verification. In JieGou, AES-256-GCM encrypts all BYOK API keys at rest — even if the database is compromised, API keys remain protected by encryption that would take billions of years to break with current computing power.

Read full definition →

Cron Schedule

A cron schedule is a time-based trigger that automatically runs AI workflows at specified intervals using cron expression syntax — for example, "0 9 * * 1" for every Monday at 9am, "0 0 * * *" for daily at midnight, or "0 */4 * * *" for every 4 hours. In JieGou, cron schedules enable unattended automation — workflows that run on their own without anyone remembering to start them.
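A cron expression has five fields — minute, hour, day of month, month, day of week. The matcher below handles only a simplified subset ("*", "*/n", and plain numbers; no ranges or lists), purely to show how the fields are interpreted:

```python
from datetime import datetime

def _field_matches(field: str, value: int) -> bool:
    """Match one cron field: '*', '*/n' (every n), or an exact number."""
    if field == "*":
        return True
    if field.startswith("*/"):
        return value % int(field[2:]) == 0
    return int(field) == value

def cron_matches(expr: str, dt: datetime) -> bool:
    """True if dt satisfies a five-field cron expression (simplified subset)."""
    minute, hour, dom, month, dow = expr.split()
    return (_field_matches(minute, dt.minute)
            and _field_matches(hour, dt.hour)
            and _field_matches(dom, dt.day)
            and _field_matches(month, dt.month)
            # cron convention: Sunday=0, Monday=1, ...
            and _field_matches(dow, dt.isoweekday() % 7))
```

A scheduler simply checks each stored expression against the current minute and starts the matching workflows.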

Read full definition →

Hallucination

AI hallucination is when a large language model generates information that sounds confident and plausible but is factually incorrect, fabricated, or unsupported by the input data. Hallucinations are a fundamental challenge in AI automation because automated workflows can propagate false information downstream without human review.

Read full definition →

Knowledge Base

A knowledge base is a managed collection of documents (PDF, DOCX, Markdown, HTML, URLs) that serve as context for AI recipes and workflows. When a recipe runs, relevant passages are retrieved from the knowledge base and included in the prompt, grounding the AI's response in your organization's actual data rather than the model's general training.

Read full definition →

Put these concepts into practice

Start building AI recipes and workflows today.