When we launched JieGou, we supported three cloud LLM providers: Anthropic, OpenAI, and Google. Then we added OpenAI-compatible endpoints for self-hosted and third-party models. Today, we’re adding five more first-class providers — bringing the total to nine.
The new providers
Mistral AI
Mistral’s models are strong at multilingual tasks, code generation, and structured output. JieGou now supports Mistral Large, Medium, and Small as first-class providers with full BYOK, circuit breaker, and key validation support.
- Best for: Multilingual content, European data residency requirements, cost-effective code generation
- Models: Mistral Large (latest), Mistral Medium, Mistral Small
Groq
Groq’s custom LPU hardware delivers inference speeds that are 10-20x faster than traditional GPU providers. When latency matters more than model size, Groq is the clear choice.
- Best for: Real-time applications, high-volume batch processing, latency-sensitive workflows
- Models: All Groq-hosted models via their OpenAI-compatible API
xAI (Grok)
xAI’s Grok models bring unique capabilities in reasoning and real-time information synthesis. Grok is available as a first-class provider with the same integration depth as Anthropic and OpenAI.
- Best for: Reasoning-heavy tasks, real-time analysis, creative generation
- Models: Grok models via xAI’s API
AWS Bedrock
For organizations already running on AWS, Bedrock provides access to multiple foundation models through a single AWS endpoint. JieGou handles the SigV4 request signing — you just provide your AWS credentials.
- Best for: AWS-native organizations, regulated industries requiring AWS VPC data boundaries, teams using IAM-based access control
- Models: All Bedrock Converse API models (Claude, Titan, Llama, Mistral via AWS)
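For the curious, the signing work JieGou takes off your plate follows the standard AWS Signature Version 4 scheme. The sketch below shows the signing-key derivation step of that scheme — it is the public AWS algorithm, not JieGou's actual source:

```python
import hashlib
import hmac


def _hmac_sha256(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()


def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive the SigV4 signing key: secret -> date -> region -> service -> aws4_request."""
    k_date = _hmac_sha256(("AWS4" + secret_key).encode(), date)  # date as YYYYMMDD
    k_region = _hmac_sha256(k_date, region)
    k_service = _hmac_sha256(k_region, service)
    return _hmac_sha256(k_service, "aws4_request")
```

The derived key then signs each request's canonical string; JieGou runs this chain per request against the `bedrock` service so you never touch it.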
Azure OpenAI
Enterprise Azure customers can now route JieGou through their Azure OpenAI deployments. This keeps all LLM traffic within your Azure tenant and uses your existing Azure AD authentication and network controls.
- Best for: Azure-first enterprises, organizations with Azure compliance requirements, teams using Azure Private Link
- Models: All Azure OpenAI deployment models (GPT-5, o3, custom fine-tuned)
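Under the hood, routing through Azure means targeting your deployment's REST route rather than `api.openai.com`. A minimal sketch of that URL shape, assuming the standard Azure OpenAI chat-completions route (the resource and deployment names below are placeholders):

```python
def azure_chat_url(resource: str, deployment: str, api_version: str = "2024-10-21") -> str:
    """Build the Azure OpenAI chat-completions URL for a given deployment."""
    return (
        f"https://{resource}.openai.azure.com/openai/deployments/"
        f"{deployment}/chat/completions?api-version={api_version}"
    )
```

Authentication happens via the `api-key` header or an Azure AD bearer token, so your existing tenant controls apply unchanged.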
Every provider is a first-class citizen
All nine providers share the same integration depth:
| Capability | All 9 Providers |
|---|---|
| BYOK encryption | AES-256-GCM with per-account keys |
| Per-provider circuit breaker | Auto-opens on repeated failures, resets after cooldown |
| Key validation | Pre-flight check against the provider’s model-listing endpoint |
| Per-step selection | Choose a different provider and model for each workflow step |
| Token tracking | Unified usage dashboard across all providers |
| Cost estimation | Pre-execution cost estimates using provider-specific pricing |
| Streaming | Real-time streaming for all supported models |
| Tool calling | MCP and custom tools work with all providers |
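To make the BYOK row concrete, here is a minimal sketch of AES-256-GCM encryption with a per-account key, using the `cryptography` package. This illustrates the pattern (random 96-bit nonce, account ID bound as associated data), not JieGou's actual implementation:

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def encrypt_api_key(account_key: bytes, api_key: str, account_id: str) -> bytes:
    """Encrypt an API key; the nonce is prepended to the ciphertext for storage."""
    nonce = os.urandom(12)
    ct = AESGCM(account_key).encrypt(nonce, api_key.encode(), account_id.encode())
    return nonce + ct


def decrypt_api_key(account_key: bytes, blob: bytes, account_id: str) -> str:
    """Decrypt a stored blob; fails if the ciphertext or account ID was tampered with."""
    nonce, ct = blob[:12], blob[12:]
    return AESGCM(account_key).decrypt(nonce, ct, account_id.encode()).decode()
```

Binding the account ID as associated data means a ciphertext copied between accounts fails authentication instead of decrypting silently.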
Per-step model selection
The real power of multi-provider support is mixing models within a single workflow:
- Step 1 (Research): Use Groq for fast initial data gathering
- Step 2 (Analysis): Use Claude Opus for deep reasoning
- Step 3 (Translation): Use Mistral for multilingual output
- Step 4 (Code generation): Use GPT-5 for structured code output
- Step 5 (Review): Use Grok for rapid quality checks
Each step can use a different provider without any additional configuration. Input mappings, output schemas, and tool calling work identically across providers.
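The five-step workflow above can be sketched as a simple routing table. The dict shape and model identifiers here are illustrative, not JieGou's actual recipe schema:

```python
WORKFLOW = [
    {"step": "research",  "provider": "groq",      "model": "llama-3.3-70b-versatile"},
    {"step": "analysis",  "provider": "anthropic", "model": "claude-opus"},
    {"step": "translate", "provider": "mistral",   "model": "mistral-large-latest"},
    {"step": "codegen",   "provider": "openai",    "model": "gpt-5"},
    {"step": "review",    "provider": "xai",       "model": "grok"},
]


def provider_for(workflow: list, step_name: str) -> str:
    """Look up which provider handles a given workflow step."""
    return next(s for s in workflow if s["step"] == step_name)["provider"]
```

Because every provider presents the same interface for inputs, outputs, and tools, swapping a step's provider is a one-field change.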
Provider-specific circuit breakers
Different providers have different reliability profiles. We’ve tuned circuit breaker thresholds per provider:
| Provider | Failure Threshold | Reset Time |
|---|---|---|
| Anthropic, OpenAI, Google, Mistral, xAI | 5 failures | 30 seconds |
| Groq | 3 failures | 60 seconds |
| AWS Bedrock | 3 failures | 45 seconds |
| Azure OpenAI | 5 failures | 30 seconds |
Groq and Bedrock use lower thresholds because those providers enforce more aggressive rate limits. The circuit breaker fails open: if Redis is unavailable, requests proceed normally.
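The breaker logic itself is straightforward. A minimal in-memory sketch with the thresholds from the table above (illustrative only — JieGou's real implementation coordinates state through Redis):

```python
import time

# (failure threshold, reset seconds); default covers Anthropic, OpenAI,
# Google, Mistral, xAI, and Azure OpenAI.
THRESHOLDS = {"groq": (3, 60), "bedrock": (3, 45), "default": (5, 30)}


class CircuitBreaker:
    def __init__(self, provider: str, clock=time.monotonic):
        self.threshold, self.reset_after = THRESHOLDS.get(provider, THRESHOLDS["default"])
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed
        self.clock = clock

    def allow(self) -> bool:
        """Return True if a request may proceed."""
        if self.opened_at is None:
            return True
        if self.clock() - self.opened_at >= self.reset_after:
            # Cooldown elapsed: reset and let traffic through again.
            self.opened_at = None
            self.failures = 0
            return True
        return False

    def record_failure(self) -> None:
        self.failures += 1
        if self.failures >= self.threshold:
            self.opened_at = self.clock()  # open the circuit

    def record_success(self) -> None:
        self.failures = 0
        self.opened_at = None
```

A request first checks `allow()`; three consecutive Groq failures open the circuit for 60 seconds, after which traffic resumes automatically.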
Plan availability
| Provider | Starter | Pro | Team | Enterprise |
|---|---|---|---|---|
| Anthropic, OpenAI, Google | Yes | Yes | Yes | Yes |
| Mistral, Groq, xAI | — | Yes | Yes | Yes |
| OpenAI-compatible | — | Yes | Yes | Yes |
| AWS Bedrock, Azure OpenAI | — | — | Yes | Yes |
| Ollama, vLLM (local) | — | — | — | Yes |
Getting started
- Go to Account Settings > API Keys
- Click Add Key and select your provider
- Enter your API key — it’s encrypted immediately with AES-256-GCM
- The key is validated against the provider’s endpoint
- Start using the provider in any recipe or workflow step
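The validation step (step 4) hits each provider's public model-listing route. A hedged sketch of what that pre-flight request can look like — these are the providers' documented endpoints, though JieGou's internal mapping may differ:

```python
VALIDATION_ENDPOINTS = {
    "openai":    ("https://api.openai.com/v1/models",
                  lambda k: {"Authorization": f"Bearer {k}"}),
    "anthropic": ("https://api.anthropic.com/v1/models",
                  lambda k: {"x-api-key": k, "anthropic-version": "2023-06-01"}),
    "mistral":   ("https://api.mistral.ai/v1/models",
                  lambda k: {"Authorization": f"Bearer {k}"}),
    "groq":      ("https://api.groq.com/openai/v1/models",
                  lambda k: {"Authorization": f"Bearer {k}"}),
}


def validation_request(provider: str, key: str):
    """Return the (url, headers) pair for a pre-flight key check."""
    url, make_headers = VALIDATION_ENDPOINTS[provider]
    return url, make_headers(key)
```

A `200` response means the key is live and scoped for inference; anything else is rejected before the key is ever used in a workflow.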
All five new providers are available today: Mistral, Groq, and xAI on Pro plans and above; Bedrock and Azure OpenAI on Team and Enterprise.
Add a provider key and start mixing models.