
Your data never leaves your infrastructure.

Run JieGou with open source LLMs entirely on-premises. One Docker Compose command. Certified models. Full workflow automation. Zero cloud dependencies.

Architecture

Everything runs inside your network

No external API calls. No data egress. Your browser connects to your JieGou instance, which talks to your LLM server and your Redis cache — all within your infrastructure.

    Your Network Perimeter
    ┌─────────────────────────────────────────────────────────┐
    │                                                         │
    │   ┌──────────┐      ┌──────────────────┐                │
    │   │ Browser  │─────▶│  JieGou Console  │                │
    │   │          │:3000 │  (SvelteKit App) │                │
    │   └──────────┘      └───┬──────────┬───┘                │
    │                         │          │                    │
    │                  Redis  │          │  OpenAI-compatible │
    │                  :6379  │          │  API calls         │
    │                         │          │                    │
    │                   ┌─────▼──┐    ┌──▼───────────┐        │
    │                   │ Redis  │    │   Ollama /   │        │
    │                   │(cache) │    │  vLLM / any  │        │
    │                   └────────┘    │  local LLM   │        │
    │                                 └──────────────┘        │
    │                                                         │
    └─────────────────────────────────────────────────────────┘
           No data leaves this boundary.

Self-Hosted Deployment

One command. Fully on-premises AI automation.

JieGou's Docker Compose starter kit bundles the console, Ollama for local LLM inference, and Redis into a single `docker compose up` command. No cloud dependencies. No data egress. Your AI workflows run entirely within your network perimeter.

  • Docker Compose starter with JieGou + Ollama + Redis
  • Auto-detects co-located Ollama and configures endpoints
  • Pull certified models (Llama 3.3, DeepSeek V3, Qwen 2.5, Mistral Large) with a single command
  • GPU acceleration with NVIDIA Container Toolkit overlay
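
The GPU overlay mentioned above can be sketched as a Compose override file. This is a minimal illustration, not the starter kit's actual overlay: the file name `compose.gpu.yml`, the base file name, and the `ollama` service name are assumptions.

```shell
# Hypothetical GPU overlay (file and service names are illustrative).
# The deploy.resources.reservations.devices stanza is standard Docker
# Compose syntax for granting NVIDIA GPU access via the Container Toolkit.
cat > compose.gpu.yml <<'EOF'
services:
  ollama:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
EOF

# Stack the overlay on top of the base file (base file name assumed):
# docker compose -f docker-compose.yml -f compose.gpu.yml up -d
```

Because Compose merges `-f` files left to right, the same base file serves CPU-only and GPU hosts; only the overlay differs.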

Open Source LLMs

Run the best open source models. No API keys required.

Use Llama, DeepSeek, Qwen, Mistral, and any OpenAI-compatible model with JieGou's full recipe and workflow system. Our certified model registry tests each model against real JieGou workflows so you know which models work for which tasks. Community models are supported too — run a Bakeoff to compare quality.

  • Certified models tested against JieGou's recipe and workflow system
  • vLLM, Ollama, SGLang, LocalAI — any OpenAI-compatible endpoint
  • Bakeoff system: compare open source vs. proprietary models side-by-side
  • Model download manager with progress tracking for Ollama endpoints
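
"Any OpenAI-compatible endpoint" means the standard chat-completions request shape works unchanged across backends. A minimal sketch, assuming a model named `llama3.3` has already been pulled:

```shell
# Build a standard OpenAI-style chat-completions request body.
cat > request.json <<'EOF'
{
  "model": "llama3.3",
  "messages": [
    {"role": "user", "content": "Summarize this support ticket in one line."}
  ]
}
EOF

# Ollama serves an OpenAI-compatible API under /v1 (default port 11434);
# pointing at vLLM, SGLang, or LocalAI changes only the base URL:
# curl http://localhost:11434/v1/chat/completions \
#   -H "Content-Type: application/json" -d @request.json
```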

Compliance Ready

Built for regulated industries

Healthcare, financial services, government, defense — when regulations require data to stay within your infrastructure, JieGou delivers. Combine air-gapped deployment with hybrid VPC agents and data residency controls. Compliance presets for HIPAA, SOX, GDPR, and FedRAMP-ready environments make configuration straightforward.

  • Data never leaves your infrastructure — zero cloud dependency
  • Compliance presets for HIPAA, SOX, GDPR, FedRAMP
  • Field-level data residency controls per department
  • Full audit trail for every AI execution and approval

Cost Advantage

Eliminate per-token costs. Pay once for hardware.

Running open source models on your own GPUs eliminates variable per-token costs. For high-volume workflows (support ticket triage, document processing, content generation), self-hosted inference can reduce LLM costs by 80-95%. JieGou's platform orchestration fee stays the same — you just stop paying per token.

  • 80-95% cost reduction for high-volume AI workflows
  • Predictable infrastructure costs vs. variable API pricing
  • No per-token charges — run unlimited inference on your hardware
  • JieGou Bakeoff compares quality + cost across deployment options
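
A back-of-envelope sketch of the claim. All figures here are hypothetical (500M tokens/month, $5 per 1M API tokens, a $6,000 GPU server amortized over 36 months, roughly $150/month for power), not JieGou benchmarks:

```shell
# Illustrative cost comparison with made-up numbers.
summary=$(awk 'BEGIN {
  api = 500 * 5.00        # 500M tokens/month at $5 per 1M tokens
  gpu = 6000 / 36 + 150   # hardware amortization plus power
  printf "API: $%.0f/mo  self-hosted: ~$%.0f/mo  savings: %.0f%%",
         api, gpu, (1 - gpu / api) * 100
}')
echo "$summary"
```

At lower volumes the math flips, since the amortized hardware cost dominates; that is why the 80-95% figure applies specifically to high-volume workflows.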

Comparison

Self-hosted capabilities compared

See how JieGou's air-gapped deployment compares to other AI automation platforms.

Features compared across JieGou, n8n, Zapier, and Copilot Studio:

  • Self-hosted deployment
  • Open source LLM support
  • Certified model registry
  • AI quality bakeoffs
  • Department-first workflows
  • Hybrid VPC deployment
  • Data residency controls
  • Air-gapped (no internet)
  • Multi-agent orchestration
  • Approval workflows
  • MCP tool integration
  • Browser automation

Quick Start

Up and running in 5 minutes

1. Clone and configure

   git clone https://github.com/JieGouAI/orion.git
   cd orion/console/self-hosted-starter
   cp .env.example .env

2. Start the services

   docker compose up -d

3. Pull a model and go

   ./models/pull-models.sh llama3.3
   # Open http://localhost:3000

JieGou auto-detects the local Ollama instance. Start building recipes and workflows immediately.
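
Auto-detection covers the bundled Ollama; to point the console at an existing server instead, `.env` is the place to do it. A sketch with hypothetical variable names (check `.env.example` for the real keys):

```shell
# Illustrative overrides only: the actual key names may differ.
# Points JieGou at an external vLLM server rather than the bundled Ollama.
cat >> .env <<'EOF'
LLM_BASE_URL=http://vllm.internal:8000/v1
LLM_MODEL=llama3.3
EOF
```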

Ready to run AI automation on your infrastructure?

Deploy JieGou with open source LLMs today. Self-hosted starter kit included. Enterprise plan for hybrid VPC deployment and compliance controls.