
9 LLM Providers, One Platform: Mistral, Groq, xAI, Bedrock, and Azure Join JieGou

JieGou now supports 9 LLM providers out of the box — from ultra-fast Groq inference to enterprise-grade AWS Bedrock and Azure OpenAI. Pick the right model for every step.

JieGou Team · 4 min read

When we launched JieGou, we supported three cloud LLM providers: Anthropic, OpenAI, and Google. Then we added OpenAI-compatible endpoints for self-hosted and third-party models. Today, we’re adding five more first-class providers — bringing the total to nine.

The new providers

Mistral AI

Mistral’s models are strong at multilingual tasks, code generation, and structured output. JieGou now supports Mistral Large, Medium, and Small as a first-class provider with full BYOK, circuit-breaker, and key-validation support.

  • Best for: Multilingual content, European data residency requirements, cost-effective code generation
  • Models: Mistral Large (latest), Mistral Medium, Mistral Small

Groq

Groq’s custom LPU hardware delivers inference speeds that are 10-20x faster than traditional GPU providers. When latency matters more than model size, Groq is the clear choice.

  • Best for: Real-time applications, high-volume batch processing, latency-sensitive workflows
  • Models: All Groq-hosted models via their OpenAI-compatible API

xAI (Grok)

xAI’s Grok models bring unique capabilities in reasoning and real-time information synthesis. Available as a first-class provider with the same integration depth as Anthropic and OpenAI.

  • Best for: Reasoning-heavy tasks, real-time analysis, creative generation
  • Models: Grok models via xAI’s API

AWS Bedrock

For organizations already running on AWS, Bedrock provides access to multiple foundation models through a single AWS endpoint. JieGou handles the SigV4 request signing — you just provide your AWS credentials.

  • Best for: AWS-native organizations, regulated industries requiring AWS VPC data boundaries, teams using IAM-based access control
  • Models: All Bedrock Converse API models (Claude, Titan, Llama, Mistral via AWS)

Azure OpenAI

Enterprise Azure customers can now route JieGou through their Azure OpenAI deployments. This keeps all LLM traffic within your Azure tenant and uses your existing Azure AD authentication and network controls.

  • Best for: Azure-first enterprises, organizations with Azure compliance requirements, teams using Azure Private Link
  • Models: All Azure OpenAI deployment models (GPT-5, o3, custom fine-tuned)

Every provider is a first-class citizen

All nine providers share the same integration depth:

| Capability | All 9 Providers |
| --- | --- |
| BYOK encryption | AES-256-GCM with per-account keys |
| Per-provider circuit breaker | Auto-opens on repeated failures, resets after cooldown |
| Key validation | Pre-flight check against the provider’s model-listing endpoint |
| Per-step selection | Choose a different provider and model for each workflow step |
| Token tracking | Unified usage dashboard across all providers |
| Cost estimation | Pre-execution cost estimates using provider-specific pricing |
| Streaming | Real-time streaming for all supported models |
| Tool calling | MCP and custom tools work with all providers |
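To make the BYOK row concrete: AES-256-GCM encryption of an API key under a per-account key looks roughly like the following. This is a generic sketch using the `cryptography` package, not JieGou's actual code:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_api_key(account_key: bytes, api_key: str) -> bytes:
    """Encrypt a provider API key under a per-account 256-bit key.
    A fresh 12-byte random nonce is prepended to the ciphertext."""
    nonce = os.urandom(12)
    ciphertext = AESGCM(account_key).encrypt(nonce, api_key.encode(), None)
    return nonce + ciphertext

def decrypt_api_key(account_key: bytes, blob: bytes) -> str:
    """Split off the nonce and decrypt; raises if the blob was tampered with."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(account_key).decrypt(nonce, ciphertext, None).decode()

account_key = AESGCM.generate_key(bit_length=256)
blob = encrypt_api_key(account_key, "sk-example-123")
print(decrypt_api_key(account_key, blob))  # sk-example-123
```

GCM authenticates as well as encrypts, so a modified ciphertext fails to decrypt rather than silently producing a garbled key.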

Per-step model selection

The real power of multi-provider support is mixing models within a single workflow:

  1. Step 1 (Research): Use Groq for fast initial data gathering
  2. Step 2 (Analysis): Use Claude Opus for deep reasoning
  3. Step 3 (Translation): Use Mistral for multilingual output
  4. Step 4 (Code generation): Use GPT-5 for structured code output
  5. Step 5 (Review): Use Grok for rapid quality checks

Each step can use a different provider without any additional configuration. Input mappings, output schemas, and tool calling work identically across providers.
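A workflow like the one above might be expressed as a per-step provider map. The field and model names here are illustrative, not JieGou's actual schema:

```python
# Hypothetical workflow definition: each step names its own provider and model.
workflow = {
    "name": "research-report",
    "steps": [
        {"id": "research",  "provider": "groq",      "model": "llama-3.3-70b-versatile"},
        {"id": "analysis",  "provider": "anthropic", "model": "claude-opus"},
        {"id": "translate", "provider": "mistral",   "model": "mistral-large-latest"},
        {"id": "codegen",   "provider": "openai",    "model": "gpt-5"},
        {"id": "review",    "provider": "xai",       "model": "grok"},
    ],
}

# Five steps, five different providers, one workflow.
providers_used = {step["provider"] for step in workflow["steps"]}
print(sorted(providers_used))  # ['anthropic', 'groq', 'mistral', 'openai', 'xai']
```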

Provider-specific circuit breakers

Different providers have different reliability profiles. We’ve tuned circuit breaker thresholds per provider:

| Provider | Failure Threshold | Reset Time |
| --- | --- | --- |
| Anthropic, OpenAI, Google, Mistral, xAI | 5 failures | 30 seconds |
| Groq | 3 failures | 60 seconds |
| AWS Bedrock | 3 failures | 45 seconds |
| Azure OpenAI | 5 failures | 30 seconds |

Groq and Bedrock have lower thresholds because they have more aggressive rate limiting. The circuit breaker is fail-open — if Redis is unavailable, requests proceed normally.
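The breaker behavior described above can be sketched in a few lines. This is a simplified in-memory version with an injectable clock; JieGou's actual implementation is Redis-backed:

```python
import time

class CircuitBreaker:
    """Opens after `threshold` consecutive failures; re-admits traffic after `reset_s`."""

    def __init__(self, threshold: int, reset_s: float, clock=time.monotonic):
        self.threshold, self.reset_s, self.clock = threshold, reset_s, clock
        self.failures = 0
        self.opened_at = None

    def allow(self) -> bool:
        if self.opened_at is None:
            return True
        if self.clock() - self.opened_at >= self.reset_s:
            # Cooldown elapsed: close the circuit and let requests probe again.
            self.opened_at = None
            self.failures = 0
            return True
        return False

    def record(self, success: bool) -> None:
        if success:
            self.failures = 0
            return
        self.failures += 1
        if self.failures >= self.threshold:
            self.opened_at = self.clock()

# Groq's profile: 3 failures to open, 60 s to reset.
breaker = CircuitBreaker(threshold=3, reset_s=60)
for _ in range(3):
    breaker.record(success=False)
print(breaker.allow())  # False: circuit is open
```

The fail-open property mentioned above means the real system treats a broken breaker store the way this sketch treats a fresh breaker: requests are allowed through.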

Plan availability

| Provider | Starter | Pro | Team | Enterprise |
| --- | --- | --- | --- | --- |
| Anthropic, OpenAI, Google | Yes | Yes | Yes | Yes |
| Mistral, Groq, xAI | No | Yes | Yes | Yes |
| OpenAI-compatible | No | Yes | Yes | Yes |
| AWS Bedrock, Azure OpenAI | No | No | Yes | Yes |
| Ollama, vLLM (local) | No | No | No | Yes |

Getting started

  1. Go to Account Settings > API Keys
  2. Click Add Key and select your provider
  3. Enter your API key — it’s encrypted immediately with AES-256-GCM
  4. The key is validated against the provider’s endpoint
  5. Start using the provider in any recipe or workflow step
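The pre-flight validation in step 4 amounts to calling the provider's model-listing endpoint with your key. A generic sketch (the endpoint and Bearer-token header shown are OpenAI-style; other providers differ, and the function names are hypothetical):

```python
import json
import urllib.error
import urllib.request

def build_validation_request(base_url: str, api_key: str) -> urllib.request.Request:
    """Build the GET /models request used to pre-flight a key."""
    return urllib.request.Request(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

def validate_key(base_url: str, api_key: str) -> bool:
    """Return True if the provider accepts the key (HTTP 200 on /models)."""
    req = build_validation_request(base_url, api_key)
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200 and "data" in json.load(resp)
    except urllib.error.HTTPError:
        return False  # 401/403: key rejected

req = build_validation_request("https://api.openai.com/v1", "sk-example")
print(req.get_header("Authorization"))  # Bearer sk-example
```

A cheap read-only call like this catches typos and revoked keys before any workflow runs against the provider.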

Mistral, Groq, and xAI are available today on Pro plans and above; AWS Bedrock and Azure OpenAI are available on Team and Enterprise.

Add a provider key and start mixing models.

Tags: llm-providers, mistral, groq, xai, bedrock, azure-openai, multi-provider, byok