AI Automation Glossary
Clear definitions of the key concepts behind AI-powered workflow automation.
AI Recipes
An AI recipe is a reusable single-task AI action with a structured prompt, a typed input schema, and a typed output schema. Recipes are the fundamental building blocks of AI automation in JieGou: each recipe performs one clearly defined task, such as scoring a lead, drafting an email, or extracting invoice data.
Read the full definition →
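The shape of a recipe can be sketched as a small data structure. This is an illustrative sketch only, not JieGou's actual API; the class, field names, and the lead-scoring example are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Recipe:
    """A single-task AI action: prompt template plus typed I/O schemas."""
    name: str
    prompt_template: str   # placeholders filled at runtime
    input_schema: dict     # field name -> required Python type
    output_schema: dict    # expected fields in the model's reply

    def render(self, inputs: dict) -> str:
        # Validate inputs against the declared schema before prompting.
        for key, typ in self.input_schema.items():
            if not isinstance(inputs.get(key), typ):
                raise TypeError(f"{key} must be {typ.__name__}")
        return self.prompt_template.format(**inputs)

score_lead = Recipe(
    name="score-lead",
    prompt_template="Score this lead from 0-100: {company}, budget {budget}.",
    input_schema={"company": str, "budget": str},
    output_schema={"score": int, "reason": str},
)
print(score_lead.render({"company": "Acme", "budget": "$50k"}))
# → Score this lead from 0-100: Acme, budget $50k.
```

The typed schemas are what make a recipe composable: a workflow can wire one recipe's output fields into the next recipe's inputs.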
AI Workflows
An AI workflow is a multi-step automation that chains AI recipes together with control-flow logic: conditions, loops, parallel branches, manual approval steps, LLM prompts, eval quality steps, routers, and aggregators. Workflows support sequential or DAG execution, turning isolated AI tasks into end-to-end business processes that run with minimal human intervention.
Read the full definition →
BYOK (Bring Your Own Key)
BYOK (Bring Your Own Key) is a model in which you connect your own LLM provider API keys, from Anthropic, OpenAI, or Google, to an automation platform. You pay the provider directly instead of the platform. Your prompts and responses flow between you and the provider; the platform never sees your data.
Read the full definition →
MCP (Model Context Protocol)
MCP (Model Context Protocol) is an open protocol that standardizes how AI models interact with external tools and services. Instead of building a custom integration for every tool, MCP provides a universal interface: tools expose their capabilities, and AI models invoke them through structured requests and responses. JieGou uses MCP for browser automation and external tool integration.
Read the full definition →
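MCP messages are JSON-RPC 2.0. A tool invocation uses the protocol's "tools/call" method and looks roughly like this; the tool name and arguments below are illustrative, not a real JieGou tool:

```python
import json

# A structured MCP tool invocation (JSON-RPC 2.0, method "tools/call").
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "browser_navigate",            # illustrative tool name
        "arguments": {"url": "https://example.com"},
    },
}
print(json.dumps(request, indent=2))
```

Because every tool is called through the same request shape, a model that speaks MCP can use any MCP server without tool-specific glue code.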
Department Packs
A department pack is a curated bundle of AI recipes, multi-step workflows, suggested schedules, and governance settings designed for a specific business function. JieGou offers 20 department packs covering Sales, Marketing, Support, HR, Finance, Legal, IT & Security, Operations, and more. Installing a pack gives a team production-ready AI automation in minutes.
Read the full definition →
AI Governance
AI governance encompasses the policies, technical controls, organizational processes, and oversight mechanisms that ensure AI systems operate safely, transparently, and within regulatory boundaries. In the context of AI automation platforms, governance includes access control (who can build and run AI), tool approval gates (which external services AI can access), audit logging (what AI did and when), cost controls (budget limits per department), and compliance alignment (mapping AI operations to frameworks like EU AI Act, NIST AI RMF, and ISO 42001).
Read the full definition →
Prompt Template
A prompt template is a reusable set of instructions for a large language model with placeholder variables that are filled at runtime with specific data. Unlike one-off prompts typed into a chat window, templates are versioned, tested, and optimized over time. In JieGou, every AI recipe is built on a prompt template with typed input schemas, output schemas, and optional knowledge base context.
Read the full definition →
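The placeholder-filling mechanic can be shown with Python's standard `string.Template`; the template text and variable names here are made up for illustration:

```python
from string import Template

# A reusable template with runtime placeholders (names are illustrative).
draft_reply = Template(
    "You are a support agent for $product.\n"
    "Write a reply to this ticket, in a $tone tone:\n$ticket"
)

prompt = draft_reply.substitute(
    product="JieGou",
    tone="friendly",
    ticket="My workflow stopped running after I rotated my API key.",
)
print(prompt)
```

The same template can be versioned and re-tested as wording improves, while callers only ever supply the variables.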
Large Language Model (LLM)
A large language model (LLM) is a neural network trained on massive text corpora that can understand instructions, reason about data, and generate human-quality text. LLMs power the AI capabilities in platforms like JieGou — when a recipe runs, it sends a structured prompt to an LLM (such as Claude, GPT, or Gemini) and receives a text response that is parsed into structured output fields.
Read the full definition →
AI Agent
An AI agent is an AI system that goes beyond generating text to autonomously reason about tasks, make decisions, use external tools, and take actions to achieve goals. Unlike a chatbot that responds to individual prompts, an agent can break down complex goals into sub-tasks, call APIs, read and write data, and iterate until the objective is complete.
Read the full definition →
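The plan-act-iterate loop can be sketched like this. The planner below is a hard-coded stand-in for an LLM decision, and the tools, goal, and lead-scoring logic are all invented for illustration:

```python
def run_agent(goal: str, tools: dict, max_steps: int = 5) -> list:
    """Iterate: pick an action, execute the tool, record the result."""
    history = []
    for _ in range(max_steps):
        action = plan(goal, history)          # stand-in for an LLM decision
        if action is None:                    # planner says goal is reached
            break
        result = tools[action["tool"]](**action["args"])
        history.append((action["tool"], result))
    return history

# Toy planner: look up the lead, then score it, then stop.
def plan(goal, history):
    steps = [{"tool": "lookup", "args": {"name": "Acme"}},
             {"tool": "score", "args": {"revenue": 50}}]
    return steps[len(history)] if len(history) < len(steps) else None

tools = {"lookup": lambda name: {"name": name, "revenue": 50},
         "score": lambda revenue: min(100, revenue * 2)}
print(run_agent("qualify the Acme lead", tools))
# → [('lookup', {'name': 'Acme', 'revenue': 50}), ('score', 100)]
```

The `max_steps` cap is the safety rail: a real agent bounds its iterations so an unreachable goal cannot loop forever.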
RAG (Retrieval-Augmented Generation)
Retrieval-Augmented Generation (RAG) is a technique that enhances LLM outputs by retrieving relevant documents from a knowledge base and including them as context in the prompt. Instead of relying solely on the model's training data, RAG grounds responses in your organization's specific documents, policies, and data — reducing hallucination and increasing accuracy.
Read the full definition →
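The retrieve-then-prompt flow can be sketched with a deliberately naive ranker; production RAG uses embedding similarity rather than word overlap, and the documents below are invented:

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap (real systems use embeddings)."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(q & set(d.lower().split())))[:k]

docs = [
    "Refunds are processed within 14 days of the return request.",
    "Our office is closed on public holidays.",
    "Refunds require the original order number.",
]
context = retrieve("how many days until refunds arrive", docs)
prompt = ("Answer using only this context:\n" + "\n".join(context)
          + "\n\nQuestion: How long do refunds take?")
print(prompt)
```

Because the answer must come from the retrieved passages, the model is grounded in your documents instead of guessing from its training data.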
RBAC (Role-Based Access Control)
Role-Based Access Control (RBAC) is a security model that assigns permissions to predefined roles (like Owner, Admin, Manager, Editor, Viewer) rather than individual users. Each role has a specific set of capabilities — who can create recipes, run workflows, approve actions, manage API keys, or view audit logs. RBAC ensures the principle of least privilege: users get exactly the access they need, nothing more.
Read the full definition →
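The role-to-permission mapping is a simple lookup. The role names echo the definition above, but the exact permission strings are illustrative, not JieGou's real permission model:

```python
# Role -> permission set (permission names are illustrative).
ROLES = {
    "Owner":  {"manage_keys", "approve", "run", "edit", "view"},
    "Admin":  {"approve", "run", "edit", "view"},
    "Editor": {"run", "edit", "view"},
    "Viewer": {"view"},
}

def can(role: str, permission: str) -> bool:
    """Check a permission against the role table; unknown roles get nothing."""
    return permission in ROLES.get(role, set())

print(can("Admin", "approve"), can("Viewer", "run"))
# → True False
```

Least privilege falls out of the table shape: granting access means assigning a role, never editing per-user flags.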
AI Bakeoff
An AI Bakeoff is a structured evaluation that compares multiple AI configurations — different LLM models, prompt variations, or workflow designs — on identical inputs using LLM-as-judge automated scoring. Bakeoffs produce ranked results with statistical confidence intervals, helping teams make data-driven decisions about which model or prompt to use in production.
Read the full definition →
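The scoring-and-ranking step can be sketched with the standard library. The configuration names and judge scores are fabricated, and the interval is a rough normal approximation (1.96 standard errors), not a full significance test:

```python
import statistics

# LLM-as-judge scores per configuration on identical inputs (made-up data).
scores = {
    "claude-prompt-a": [8.5, 9.0, 8.8, 9.2, 8.7],
    "claude-prompt-b": [7.9, 8.1, 8.4, 7.6, 8.0],
}

def summarize(vals: list[float]) -> tuple[float, float]:
    """Mean and an approximate 95% interval half-width (1.96 * SEM)."""
    mean = statistics.mean(vals)
    sem = statistics.stdev(vals) / len(vals) ** 0.5
    return mean, 1.96 * sem

for name, vals in sorted(scores.items(),
                         key=lambda kv: -statistics.mean(kv[1])):
    mean, ci = summarize(vals)
    print(f"{name}: {mean:.2f} ± {ci:.2f}")
```

Reporting the interval alongside the mean is what separates a bakeoff from eyeballing two outputs: overlapping intervals mean the data does not yet support picking a winner.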
Webhook
A webhook is an HTTP endpoint that receives data from external systems when specific events occur — a new support ticket, a form submission, a CRM update, or a payment event. In JieGou, webhooks serve as triggers that automatically start AI workflows in response to real-world events, enabling real-time automation without manual intervention or polling.
Read the full definition →
Approval Gate
An approval gate is a step in an AI workflow that pauses execution and requires one or more designated humans to review the current output and approve, reject, or modify it before the workflow continues. Approval gates insert human judgment at critical decision points — before sending a customer email, publishing content, or executing a financial transaction.
Read the full definition →
DAG Execution
DAG execution is a workflow execution mode where steps run concurrently based on their dependency relationships rather than sequentially. Steps that don't depend on each other run in parallel, while dependent steps wait for their inputs. This dramatically reduces end-to-end execution time for complex workflows with independent branches.
Read the full definition →
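The scheduling idea can be shown with Python's standard `graphlib`: group steps into "waves" where everything in a wave is mutually independent and can run in parallel. The step names are an invented example workflow:

```python
from graphlib import TopologicalSorter

# Step -> set of steps it depends on (step names are illustrative).
deps = {
    "fetch_crm": set(),
    "fetch_news": set(),
    "score": {"fetch_crm"},
    "summarize": {"fetch_crm", "fetch_news"},
    "email": {"score", "summarize"},
}

# Group steps into waves: everything in a wave can run concurrently.
ts = TopologicalSorter(deps)
ts.prepare()
waves = []
while ts.is_active():
    ready = sorted(ts.get_ready())   # all steps whose inputs are satisfied
    waves.append(ready)
    ts.done(*ready)
print(waves)
# → [['fetch_crm', 'fetch_news'], ['score', 'summarize'], ['email']]
```

A sequential run would take five step-durations; the DAG schedule above finishes in three, and the gap widens as workflows grow more branches.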
Structured Output
Structured output is the practice of having an LLM return data in a predefined schema — typed fields with specific names, types, and validation rules — rather than free-form natural language. This makes AI output machine-readable and suitable for downstream automation: feeding into databases, triggering conditional logic, populating dashboards, or passing to the next workflow step.
Read the full definition →
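The validation side can be sketched as parsing the model's reply against a declared schema; the field names and the sample reply are illustrative:

```python
import json

# Expected schema: field name -> required Python type (illustrative).
SCHEMA = {"score": int, "reason": str}

def parse_output(raw: str) -> dict:
    """Parse an LLM reply into typed fields, failing loudly on mismatch."""
    data = json.loads(raw)
    for field, typ in SCHEMA.items():
        if not isinstance(data.get(field), typ):
            raise ValueError(f"field {field!r} must be {typ.__name__}")
    return data

result = parse_output('{"score": 87, "reason": "Strong budget fit."}')
print(result["score"])
# → 87
```

Failing loudly matters: a schema violation should stop the workflow at this step rather than feed malformed data to everything downstream.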
Token
A token is the fundamental unit of text that large language models process. In English, one token is roughly 3/4 of a word (100 tokens ≈ 75 words). LLM providers charge based on token consumption: input tokens (your prompt and context) plus output tokens (the model's response). Understanding tokens is essential for managing AI costs and staying within model context windows.
Read the full definition →
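A back-of-the-envelope cost estimate follows directly from the definition. This sketch uses the common rough heuristic of ~4 characters per English token, and the per-million-token prices are placeholders, not any provider's real rates:

```python
# Rough cost estimate using the ~4 characters per token heuristic.
# Prices per million tokens are illustrative, not real provider rates.
PRICE_IN, PRICE_OUT = 3.00, 15.00   # USD per 1M input / output tokens

def estimate_cost(prompt: str, output: str) -> float:
    tokens_in = len(prompt) / 4
    tokens_out = len(output) / 4
    return (tokens_in * PRICE_IN + tokens_out * PRICE_OUT) / 1_000_000

# A 2,000-character prompt and a 600-character reply:
print(f"${estimate_cost('x' * 2000, 'y' * 600):.6f}")
# → $0.003750
```

Note the asymmetry: output tokens typically cost several times more than input tokens, so verbose model replies dominate the bill.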
AI Automation
AI automation is the use of large language models and AI agents to execute business tasks and multi-step workflows that traditionally required human judgment — analyzing data, generating content, making decisions, and taking actions. Unlike traditional rule-based automation (if-then logic), AI automation handles ambiguity, understands context, and produces human-quality outputs.
Read the full definition →
EU AI Act
The EU AI Act is the European Union's comprehensive regulatory framework for artificial intelligence, adopted in 2024. It classifies AI systems into risk tiers (unacceptable, high, limited, minimal) and imposes requirements including transparency, human oversight, technical documentation, and conformity assessments. For organizations deploying AI automation, the EU AI Act creates compliance obligations around how AI systems are designed, deployed, and monitored.
Read the full definition →
NIST AI RMF
The NIST AI Risk Management Framework (AI RMF) is a voluntary framework published by the U.S. National Institute of Standards and Technology for identifying, assessing, and mitigating risks in AI systems. It is organized around four core functions: Govern (policies and oversight), Map (context and risk identification), Measure (risk assessment and tracking), and Manage (risk treatment and response).
Read the full definition →
Data Residency
Data residency is the requirement that data be stored and processed within specific geographic boundaries. For AI automation, data residency controls determine where prompts, inputs, outputs, and audit logs are stored — critical for organizations subject to GDPR (EU), HIPAA (US healthcare), PDPA (Asia-Pacific), or industry-specific regulations that mandate data remain within certain jurisdictions.
Read the full definition →
A2A Protocol
The Agent-to-Agent (A2A) protocol is an open standard for inter-agent communication that allows AI agents built on different platforms to discover each other, negotiate capabilities, delegate tasks, and share results. A2A enables a multi-vendor agent ecosystem where a JieGou workflow can delegate a sub-task to an external agent and receive structured results back.
Read the full definition →
Convergence Loop
A convergence loop is a quality control mechanism in AI workflows that links an eval step (quality gate) back to an upstream step. When the eval scores output below a configurable quality threshold, the workflow automatically re-executes the upstream steps with feedback from the eval, iterating until the output meets the quality bar or a maximum iteration count is reached.
Read the full definition →
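The generate-evaluate-retry cycle can be sketched as a bounded loop. The generator and evaluator below are toy stand-ins whose quality score simply improves each round; real steps would be an LLM call and an eval step:

```python
def converge(generate, evaluate, threshold: float = 0.8, max_iters: int = 3):
    """Re-run the upstream step with eval feedback until quality passes."""
    feedback = None
    for attempt in range(1, max_iters + 1):
        output = generate(feedback)
        score, feedback = evaluate(output)
        if score >= threshold:
            return output, attempt
    return output, max_iters   # best effort after hitting the cap

# Toy steps: each round of eval feedback lifts the score by 0.3.
state = {"score": 0.3}
generate = lambda fb: f"draft (score target {state['score']:.1f})"
def evaluate(output):
    score = state["score"]
    state["score"] = min(1.0, score + 0.3)
    return score, "tighten the summary"
print(converge(generate, evaluate))
# → ('draft (score target 0.9)', 3)
```

The `max_iters` cap is essential: it bounds cost and guarantees the loop terminates even when the quality bar is never met.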
Multi-Channel AI
Multi-channel AI is the deployment of AI-powered automation across multiple communication channels from a single platform. Instead of building separate AI integrations for each messaging platform, multi-channel AI lets you create one workflow that serves customers on LINE, WhatsApp, Instagram, Messenger, email, web chat, and more — with unified inbox management and consistent AI quality across all channels.
Read the full definition →
AI Readiness
AI readiness is a measure of an organization's preparedness to adopt, deploy, and sustain AI automation. It encompasses technical infrastructure (data quality, API integrations, security posture), organizational factors (leadership buy-in, AI literacy, change management), and governance maturity (policies, compliance frameworks, audit capabilities). Organizations with high AI readiness can deploy AI automation faster and with lower risk.
Read the full definition →
AES-256-GCM Encryption
AES-256-GCM (Advanced Encryption Standard with 256-bit keys in Galois/Counter Mode) is an authenticated encryption algorithm that provides both data confidentiality and integrity verification. In JieGou, AES-256-GCM encrypts all BYOK API keys at rest — even if the database is compromised, API keys remain protected by encryption that would take billions of years to break with current computing power.
Read the full definition →
Cron Schedule
A cron schedule is a time-based trigger that automatically runs AI workflows at specified intervals using cron expression syntax (e.g., "every Monday at 9am", "daily at midnight", "every 4 hours"). In JieGou, cron schedules enable unattended automation — workflows that run on their own without anyone remembering to start them.
Read the full definition →
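A cron expression has five fields: minute, hour, day of month, month, and day of week. This sketch matches only the simplest form, bare numbers and `*`; real cron also supports ranges, lists, and steps like `*/4`:

```python
from datetime import datetime

def cron_matches(expr: str, dt: datetime) -> bool:
    """Check a 5-field cron expression (minute hour day month weekday).
    Supports only '*' and plain numbers; real cron adds ranges and steps."""
    fields = expr.split()
    actual = [dt.minute, dt.hour, dt.day, dt.month, dt.isoweekday() % 7]
    return all(f == "*" or int(f) == a for f, a in zip(fields, actual))

# "0 9 * * 1" = every Monday at 09:00 (cron counts Sunday as 0).
print(cron_matches("0 9 * * 1", datetime(2025, 1, 6, 9, 0)))   # a Monday
print(cron_matches("0 9 * * 1", datetime(2025, 1, 7, 9, 0)))   # a Tuesday
```

A scheduler evaluates each expression once per minute and starts the workflow whenever the current time matches.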
Hallucination
AI hallucination is when a large language model generates information that sounds confident and plausible but is factually incorrect, fabricated, or unsupported by the input data. Hallucinations are a fundamental challenge in AI automation because automated workflows can propagate false information downstream without human review.
Read the full definition →
Knowledge Base
A knowledge base is a managed collection of documents (PDF, DOCX, Markdown, HTML, URLs) that serve as context for AI recipes and workflows. When a recipe runs, relevant passages are retrieved from the knowledge base and included in the prompt, grounding the AI's response in your organization's actual data rather than the model's general training.
Read the full definition →
Put these concepts into practice
Start building AI recipes and workflows now.