Token
Definition
A token is the fundamental unit of text that large language models process. In English, one token is roughly 3/4 of a word (100 tokens ≈ 75 words). LLM providers charge based on token consumption: input tokens (your prompt and context) plus output tokens (the model's response). Understanding tokens is essential for managing AI costs and staying within model context windows.
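The 100-tokens-≈-75-words rule of thumb above can be turned into a quick estimator. This is a minimal sketch for rough budgeting only; real tokenizers (which vary by model) split on subwords, punctuation, and whitespace, so actual counts will differ:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate from the 100 tokens ~= 75 words rule of thumb.

    tokens ~= words / 0.75, i.e. about 4/3 tokens per English word.
    This is an approximation, not the model's actual tokenizer.
    """
    words = len(text.split())
    return round(words / 0.75)
```

For example, a 75-word paragraph comes out to roughly 100 tokens. For exact counts, use the tokenizer published by your model provider.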
Token Tracking in JieGou
JieGou tracks input and output tokens for every recipe run and workflow execution. This data feeds into per-recipe, per-workflow, and per-department cost dashboards. With BYOK, tokens are billed directly to your provider account at their standard rates.
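The cost math behind those dashboards is simple: input and output tokens are metered separately, each at a per-token rate set by the provider. A minimal sketch (the function name and the example rates are illustrative, not JieGou's API or any provider's actual pricing):

```python
def run_cost(input_tokens: int, output_tokens: int,
             input_rate: float, output_rate: float) -> float:
    """Dollar cost of one run, given per-million-token rates.

    input_rate / output_rate: USD per 1M tokens (hypothetical values;
    check your provider's pricing page for real rates).
    """
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000
```

A run with 50K input tokens and 2K output tokens at $3.00/$15.00 per million would cost $0.18. Note that output tokens are typically several times more expensive than input tokens, so verbose responses drive costs faster than long prompts.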
Context Windows
Each LLM has a maximum context window — the total number of tokens it can process in one request (prompt + response). Claude supports up to 200K tokens, GPT-4 Turbo up to 128K. JieGou's RAG system is designed to stay within these limits by retrieving only the most relevant document chunks.
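Staying within a context window is a budgeting problem: the prompt, the retrieved chunks, and the reserved response space must all fit. A minimal sketch of the idea — greedily adding chunks in relevance order until the budget runs out (this illustrates the general RAG pattern, not JieGou's actual retrieval logic):

```python
def select_chunks(chunks: list[tuple[str, int]], prompt_tokens: int,
                  max_output: int, window: int = 200_000) -> list[str]:
    """Pick retrieved chunks that fit the remaining context budget.

    chunks: (text, token_count) pairs, pre-sorted most-relevant first.
    The budget is what's left of the window after the prompt and the
    tokens reserved for the model's response.
    """
    budget = window - prompt_tokens - max_output
    selected = []
    for text, tokens in chunks:
        if tokens <= budget:        # skip chunks that would overflow
            selected.append(text)
            budget -= tokens
    return selected
```

With a 600-token window, a 100-token prompt, and 100 tokens reserved for output, only 400 tokens of chunks can be included; anything beyond that is dropped, starting with the least relevant.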
Related Terms
AI Recipes
Learn what AI recipes are and how they work in JieGou. Recipes are reusable, single-operation AI building blocks with structured inputs and outputs.
BYOK (Bring Your Own Key)
Learn what BYOK means for AI automation. Bring Your Own Key lets you connect your own LLM API keys to JieGou for full cost control and data privacy.
Large Language Model (LLM)
A large language model (LLM) is an AI system trained on text data that can understand and generate human language, powering tasks like writing, analysis, and reasoning.