Hallucination
Definition
AI hallucination occurs when a large language model generates output that sounds confident and plausible but is factually incorrect, fabricated, or unsupported by the input data. Hallucinations are a fundamental challenge in AI automation because automated workflows can propagate false information downstream without human review.
Reducing Hallucination
JieGou reduces hallucination risk through multiple mechanisms: RAG (grounding responses in your actual documents), structured output schemas (constraining what the model can return), eval quality gates (scoring outputs before they proceed), convergence loops (iterating until quality thresholds are met), and approval gates (human review at critical decision points).
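One of these mechanisms, structured output schemas, can be illustrated with a minimal sketch. The field names and the `validate_output` helper below are hypothetical, not part of any JieGou API; the idea is simply that model output is parsed and checked against a fixed schema before anything downstream may consume it:

```python
import json

# Hypothetical schema: every response must carry an answer string,
# a list of source IDs, and a numeric confidence. Anything else is
# rejected before it can propagate downstream.
REQUIRED_FIELDS = {"answer": str, "source_ids": list, "confidence": float}

def validate_output(raw: str) -> dict:
    """Parse model output and reject anything outside the schema."""
    data = json.loads(raw)  # raises ValueError on malformed JSON
    for field, expected_type in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), expected_type):
            raise ValueError(f"schema violation: {field!r}")
    return data

# A well-formed response passes; free-text output would not parse.
ok = validate_output('{"answer": "42", "source_ids": ["doc-7"], "confidence": 0.9}')
print(ok["answer"])
```

Constraining the shape of the output does not guarantee factual accuracy on its own, which is why schemas are combined with grounding (RAG) and scoring (eval gates) in practice.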
Related Terms
AI Governance
AI governance is the set of policies, controls, and oversight mechanisms that ensure AI systems operate safely, ethically, and in compliance with regulations.
RAG (Retrieval-Augmented Generation)
RAG retrieves relevant documents from a knowledge base and includes them as context when prompting an LLM, grounding AI responses in your organization's data.
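The retrieve-then-prompt flow can be sketched as follows. This is a toy illustration: keyword overlap stands in for a real vector search, and the prompt template is an assumption, not a JieGou implementation detail:

```python
# Minimal RAG sketch: rank documents against the query, then build a
# prompt that instructs the model to answer only from that context.

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by shared words with the query (toy retriever)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = ["Refunds are processed within 5 business days.",
        "Support is available 24/7 via chat.",
        "Our office is closed on public holidays."]
print(build_prompt("How long do refunds take?", docs))
```

A production retriever would use embeddings and a vector index rather than word overlap, but the grounding principle is the same: the model answers from retrieved documents instead of its parametric memory.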
Convergence Loop
A convergence loop automatically re-runs workflow steps when an eval gate scores output below a quality threshold, iterating until quality criteria are met.
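The loop described above can be sketched generically. Here `generate` and `score` are placeholders for a real model call and eval gate, and the retry policy (return the best attempt after exhausting retries) is an assumption for illustration:

```python
# Hypothetical convergence loop: re-run a workflow step until the eval
# score clears the quality threshold or attempts run out.

def converge(generate, score, threshold: float, max_attempts: int = 3):
    best, best_score = None, float("-inf")
    for attempt in range(1, max_attempts + 1):
        output = generate(attempt)
        s = score(output)
        if s >= threshold:
            return output, s  # quality gate passed
        if s > best_score:
            best, best_score = output, s  # track best attempt so far
    return best, best_score  # best effort after exhausting retries

# Toy run: score improves with each attempt and clears 0.8 on the third.
out, s = converge(lambda a: f"draft-{a}", lambda o: 0.3 * int(o[-1]), 0.8)
print(out, s)
```

In a real workflow the loop would also vary the generation (revised prompt, feedback from the eval) between attempts; otherwise re-running an identical deterministic step cannot improve the score.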