JieGou is now a managed AI operations company.

You're looking at a page from when we sold a platform. We pivoted to managed services: we run marketing, customer engagement, and back-office operations on your behalf across 17 industries. The capability below is still real; it's now part of how we deliver, not what you operate.

Token

Definition

A token is the fundamental unit of text that large language models process. In English, one token is roughly 3/4 of a word, or about 4 characters (100 tokens ≈ 75 words). LLM providers charge based on token consumption: input tokens (your prompt and context) plus output tokens (the model's response). Understanding tokens is essential for managing AI costs and staying within model context windows.
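The ratio above translates directly into a back-of-envelope estimator. This is a sketch, not a real tokenizer: the function names and the per-million-token prices are illustrative, and actual counts vary by model and content (a production system would use the provider's tokenizer library).

```python
def estimate_tokens(text: str) -> int:
    """Approximate token count using the 100 tokens ≈ 75 words rule of thumb."""
    words = len(text.split())
    return round(words * 100 / 75)

def estimate_cost(input_tokens: int, output_tokens: int,
                  price_in_per_m: float, price_out_per_m: float) -> float:
    """Dollar cost of one request, given per-million-token prices.

    Prices are parameters because they differ by provider and model;
    the values passed in any example here are assumptions.
    """
    return (input_tokens * price_in_per_m
            + output_tokens * price_out_per_m) / 1_000_000

# "Summarize the quarterly report in three bullet points." is 8 words,
# so the heuristic estimates roughly 11 tokens.
prompt_tokens = estimate_tokens("Summarize the quarterly report in three bullet points.")
```

Because pricing is asymmetric (output tokens usually cost several times more than input tokens), a short prompt that elicits a long response can cost more than a long prompt with a terse answer.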

Token Tracking in JieGou

JieGou tracks input and output tokens for every recipe run and workflow execution. This data feeds into per-recipe, per-workflow, and per-department cost dashboards. With BYOK, tokens are billed directly to your provider account at their standard rates.

Context Windows

Each LLM has a maximum context window — the total number of tokens it can process in one request (prompt + response). Claude supports up to 200K tokens, GPT-4 Turbo up to 128K. JieGou's RAG system is designed to stay within these limits by retrieving only the most relevant document chunks.
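Staying under the limit amounts to a budgeting problem: reserve tokens for the prompt and the expected response, then greedily keep the highest-relevance chunks that fit in what remains. This is a simplified sketch of that idea, not JieGou's actual retrieval logic; the limits, reservation sizes, and chunk scores are assumed values.

```python
CONTEXT_WINDOW = 200_000      # e.g. Claude's limit
RESERVED_FOR_OUTPUT = 4_000   # leave room for the model's response (assumed)
PROMPT_TOKENS = 500           # fixed prompt overhead (assumed)

def select_chunks(chunks: list[tuple[float, int]], budget: int) -> list[tuple[float, int]]:
    """Greedily keep the most relevant chunks that fit within the token budget.

    chunks: (relevance_score, token_count) pairs, in any order.
    """
    selected, used = [], 0
    for score, tokens in sorted(chunks, key=lambda c: -c[0]):
        if used + tokens <= budget:
            selected.append((score, tokens))
            used += tokens
    return selected

budget = CONTEXT_WINDOW - RESERVED_FOR_OUTPUT - PROMPT_TOKENS
# Three candidate chunks: the two most relevant fit; the third would overflow.
picked = select_chunks([(0.9, 120_000), (0.8, 60_000), (0.5, 30_000)], budget)
```

Greedy selection by relevance is the simplest policy; a real retriever might also deduplicate overlapping chunks or pack by score-per-token.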

See it in action

Start building AI automations with recipes and workflows today.