Model Freedom.
Any model, any provider, any infrastructure.
Nine cloud providers, four certified open-source models, any OpenAI-compatible endpoint, and self-hosted Ollama/vLLM — all from one platform. Pick the best model for every task, compare them head-to-head in bakeoffs, and never get locked into a single provider. Zapier gives you three providers. JieGou gives you model freedom.
Cloud Providers
Three flagship providers, hundreds of models
Access every major model family from a single platform. Claude for nuanced reasoning, GPT-5 for broad general tasks, Gemini for multimodal analysis. Bring your own API keys and pay your provider directly, or use platform-provided keys to get started instantly.
- Anthropic — Claude Sonnet 4.6, Haiku 4.5, Opus 4.6 with extended thinking
- OpenAI — GPT-5.2, GPT-5-mini, GPT-5-nano, o3, o4-mini with reasoning
- Google — Gemini 3.1 Pro, Gemini 3 Flash, Gemini 2.5 Pro/Flash
- Web search available on select models across all three providers
Open Source Models
Run Llama, DeepSeek, Qwen, and Mistral locally
Connect any OpenAI-compatible endpoint — Ollama, vLLM, LocalAI, Together AI, Groq, or your own fine-tuned model. JieGou auto-discovers local inference servers and lists available models. Four models are certified by JieGou with tested tool calling and structured output; a connection sketch follows the list below.
- Llama 4 Maverick — 400B+ MoE with tool calling, vision, and 1M-token context
- DeepSeek V3.2 — 671B MoE with strong reasoning and 128K context
- Qwen 3 235B — Multilingual MoE with structured output and 128K context
- Mistral 3 Large — 123B with tool calling, vision, and 128K context
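What "OpenAI-compatible" means in practice: the standard `openai` client pointed at a self-hosted server. This is a minimal sketch, not JieGou's connector code; it assumes an Ollama server on its default port (vLLM defaults to 8000), and the model tag is a hypothetical placeholder.

```ts
import OpenAI from "openai";

// Point the official OpenAI client at a self-hosted server.
// Ollama serves an OpenAI-compatible API at /v1 by default;
// vLLM does the same when launched with `vllm serve <model>`.
const client = new OpenAI({
  baseURL: "http://localhost:11434/v1", // Ollama default port; vLLM typically uses 8000
  apiKey: "ollama", // local servers usually ignore the key, but the client requires one
});

async function main() {
  // Enumerate whatever models the local server exposes. This is the
  // kind of probe an auto-discovery step can use to list available models.
  const models = await client.models.list();
  for (const m of models.data) console.log(m.id);

  // Chat against one of the discovered models.
  const reply = await client.chat.completions.create({
    model: "llama4:maverick", // hypothetical tag; substitute a model you have pulled
    messages: [{ role: "user", content: "Summarize this ticket in one line." }],
  });
  console.log(reply.choices[0].message.content);
}

main().catch(console.error);
```

Swapping providers is then just a different `baseURL` and model name; the calling code stays the same.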
Per-Step Selection
The right model for every task in your workflow
Different tasks need different models. Use Claude Opus for deep analysis, GPT-5-nano for fast classification, and Llama 4 for high-volume extraction — all in the same workflow. JieGou's model recommendation engine learns from your execution history, scoring each model on success rate, cost efficiency, and speed.
- Choose a different model for every recipe, workflow step, and conversation
- Recommendation engine scores models: success rate (50%) + cost efficiency (30%) + speed (20%); see the sketch after this list
- Cost estimation from historical runs — see projected spend before execution
- Run bakeoffs to compare any two models head-to-head with statistical confidence
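As a hedged illustration of the published weighting, here is one way such a score could be computed. The `ModelStats` shape and the min-normalization of cost and latency are assumptions for the sketch, not JieGou internals.

```ts
// Sketch of the documented scoring weights:
// success rate (50%) + cost efficiency (30%) + speed (20%).
interface ModelStats {
  successRate: number;   // 0..1, share of runs that completed without error
  costPerRunUsd: number; // mean cost per execution from historical runs
  meanLatencyMs: number; // mean end-to-end latency
}

// Normalize cost and latency against the best candidate, so the
// cheapest / fastest model scores 1.0 and others proportionally less.
function scoreModels(stats: Map<string, ModelStats>): [string, number][] {
  const minCost = Math.min(...[...stats.values()].map(s => s.costPerRunUsd));
  const minLatency = Math.min(...[...stats.values()].map(s => s.meanLatencyMs));

  return [...stats.entries()]
    .map(([model, s]): [string, number] => [
      model,
      0.5 * s.successRate +
      0.3 * (minCost / s.costPerRunUsd) +   // cost efficiency
      0.2 * (minLatency / s.meanLatencyMs), // speed
    ])
    .sort((a, b) => b[1] - a[1]); // best first
}

// Projected spend before execution, from historical per-run cost.
const estimateSpend = (s: ModelStats, plannedRuns: number) =>
  s.costPerRunUsd * plannedRuns;
```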
BYOK Security
Zero-knowledge key management with enterprise resilience
Your API keys are encrypted with AES-256-GCM using per-account derived keys via HKDF-SHA256. Keys are decrypted only in memory during execution and never stored in plaintext (a sketch of the scheme follows the list below). Per-provider circuit breakers detect outages and fail gracefully, while priority-based concurrency limits prevent runaway usage.
- AES-256-GCM envelope encryption with HKDF-SHA256 per-account key derivation
- Circuit breakers per provider — 5 errors in 60s trips open, auto-recovers in 30s
- Priority-based concurrency limits — Enterprise gets 100% of global capacity
- Key validation on save, auto-invalidation on 401/403, and rotation support
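A minimal sketch of the documented scheme in Node's built-in `crypto`: HKDF-SHA256 derives a per-account key from a master secret, and AES-256-GCM seals the provider key. The master-secret handling, empty salt, and record layout are assumptions for illustration, not JieGou's implementation.

```ts
import { hkdfSync, randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

// Derive a per-account key with HKDF-SHA256. Using the account ID as the
// HKDF "info" binds each derived key to one account (assumed layout).
function deriveAccountKey(masterSecret: Buffer, accountId: string): Buffer {
  return Buffer.from(
    hkdfSync("sha256", masterSecret, Buffer.alloc(0), `account:${accountId}`, 32),
  );
}

function encryptApiKey(masterSecret: Buffer, accountId: string, apiKey: string) {
  const key = deriveAccountKey(masterSecret, accountId);
  const iv = randomBytes(12); // 96-bit nonce, the recommended size for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(apiKey, "utf8"), cipher.final()]);
  // Persist iv + ciphertext + tag; the plaintext key is never stored.
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function decryptApiKey(
  masterSecret: Buffer,
  accountId: string,
  sealed: { iv: Buffer; ciphertext: Buffer; tag: Buffer },
): string {
  const key = deriveAccountKey(masterSecret, accountId);
  const decipher = createDecipheriv("aes-256-gcm", key, sealed.iv);
  decipher.setAuthTag(sealed.tag); // GCM authenticates as well as encrypts
  return Buffer.concat([decipher.update(sealed.ciphertext), decipher.final()]).toString("utf8");
}
```

Because GCM is authenticated, any tampering with the stored ciphertext or tag makes `decipher.final()` throw rather than return garbage.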
Supported Models
Cloud providers, certified open source, and any OpenAI-compatible endpoint
Cloud providers with BYOK, four certified open-source models on vLLM, and any model behind an OpenAI-compatible API.
Anthropic
Claude Sonnet 4.6, Haiku 4.5, Opus 4.6
OpenAI
GPT-5.2, GPT-5-mini, o3, o4-mini
Google
Gemini 3.1 Pro, 3 Flash, 2.5 Pro
Mistral AI
Large, Medium, Small
Groq
Ultra-fast LPU inference
xAI
Grok models
AWS Bedrock
Converse API (SigV4)
Azure OpenAI
Deployment-based endpoints
Llama 4 Maverick
400B+ MoE — vLLM certified
DeepSeek V3.2
671B MoE — vLLM certified
Qwen 3 235B
235B MoE — vLLM certified
Mistral 3 Large
123B dense — vLLM certified
OpenAI-Compatible
Any custom endpoint
How It Works
Three steps to model freedom
Bring your keys
Add API keys for Anthropic, OpenAI, or Google — or enter a custom OpenAI-compatible base URL for self-hosted models. Keys are encrypted with AES-256-GCM before storage.
Pick per task
Select a model for each recipe, workflow step, or conversation. The recommendation engine suggests the best model based on your execution history.
Compare and optimize
Run bakeoffs to compare any two models head-to-head. Track costs per provider and per recipe. Swap models without changing a single prompt.
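JieGou doesn't publish the statistics behind its bakeoff confidence, but a two-proportion z-test is one standard way to decide whether an observed gap in success rates is real. The sketch below assumes that approach.

```ts
// Hedged sketch of a head-to-head bakeoff readout: compare two models'
// success rates and attach a confidence judgment via a two-proportion z-test.
function bakeoff(aSuccess: number, aRuns: number, bSuccess: number, bRuns: number) {
  const pA = aSuccess / aRuns;
  const pB = bSuccess / bRuns;
  const pooled = (aSuccess + bSuccess) / (aRuns + bRuns);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / aRuns + 1 / bRuns));
  const z = (pA - pB) / se;
  // |z| > 1.96 corresponds to 95% confidence for a two-sided test.
  return { pA, pB, z, significant: Math.abs(z) > 1.96 };
}

// e.g. 92/100 successes vs 84/100 gives z ≈ 1.74, short of the 1.96 cutoff:
// keep collecting runs before declaring a winner.
console.log(bakeoff(92, 100, 84, 100));
```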
Plans
Multi-provider support on every plan
BYOK and multi-provider model access are available on all plans. Open-source model support and advanced features scale with your tier.
Starter (Free)
- Anthropic, OpenAI, Google providers
- BYOK or platform-provided keys
- Per-recipe model selection
Pro / Team
- Everything in Starter
- OpenAI-compatible custom endpoints
- Model recommendation engine
- Recipe & model bakeoffs
Enterprise
- Everything in Pro / Team
- Ollama/vLLM auto-discovery
- Certified model registry
- Priority concurrency limits and key rotation
Outage Resilience
Business continuity built in
On March 2, 2026, a global Anthropic outage halted every single-provider AI deployment. JieGou customers with multi-provider configurations kept running. Provider diversity isn't just flexibility — it's insurance.
Single-Provider Deployment
When your only provider goes down, every workflow stops. No fallback. No recovery. Complete operational halt until the provider recovers.
Multi-Provider with JieGou BYOM
When one provider goes down, workflows automatically continue on available providers. Circuit breakers detect the outage. Your operations never stop.
On March 2, 2026, JieGou customers with multi-provider BYOM configurations experienced zero downtime during the global Anthropic outage. Single-provider deployments were fully halted.
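For the mechanics, here is a hedged sketch of how the documented breaker policy (five errors in 60 seconds trips open, 30-second auto-recovery) could drive failover across providers. The `Provider` interface and the routing loop are illustrative assumptions, not JieGou's runtime.

```ts
// Per-provider circuit breaker: trips open after 5 failures within a
// 60s window, and stops rejecting traffic 30s after it tripped.
class CircuitBreaker {
  private failures: number[] = []; // timestamps of recent failures
  private openedAt = 0;

  get open(): boolean {
    return this.openedAt !== 0 && Date.now() - this.openedAt < 30_000; // 30s recovery
  }

  recordFailure(): void {
    const now = Date.now();
    this.failures = this.failures.filter(t => now - t < 60_000); // 60s window
    this.failures.push(now);
    if (this.failures.length >= 5) { this.openedAt = now; this.failures = []; } // trip open
  }

  recordSuccess(): void { this.openedAt = 0; this.failures = []; }
}

interface Provider {
  name: string;
  complete(prompt: string): Promise<string>; // assumed call shape
}

// Try providers in priority order, routing around any whose breaker is open.
async function completeWithFailover(
  providers: Provider[],
  breakers: Map<string, CircuitBreaker>,
  prompt: string,
): Promise<string> {
  for (const p of providers) {
    const breaker = breakers.get(p.name)!;
    if (breaker.open) continue; // provider is down; skip it
    try {
      const result = await p.complete(prompt);
      breaker.recordSuccess();
      return result;
    } catch {
      breaker.recordFailure(); // after 5 failures in 60s this provider is skipped
    }
  }
  throw new Error("All providers unavailable");
}
```

With this shape, an outage at one provider trips its breaker and every subsequent request flows to the next configured provider without prompt changes.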
Read the full analysis
Model freedom starts here
Connect your API keys, discover local models, and let JieGou recommend the best model for every task. Nine providers, any OpenAI-compatible endpoint, and self-hosted Ollama/vLLM. Start free today.