You can’t improve what you can’t measure. That’s why JieGou introduced GovernanceScore — a single number from 0 to 100 that tells you exactly how mature your AI governance posture is, and precisely what to do next to improve it.
GovernanceScore isn’t a vanity metric. It’s a quantitative assessment across eight factors that matter for AI safety, compliance, and operational excellence. Each factor contributes to the total score, and each can be improved independently.
Here’s how it works.
The 8 factors
1. RBAC coverage (0–15 points)
This measures how thoroughly you’ve implemented role-based access control across your organization. Are all team members assigned appropriate roles? Are permissions aligned with actual job functions? Do you have role coverage across all departments using JieGou?
How to improve: Ensure every team member has a role assigned. Review permissions quarterly. Use the Manager role for department leads rather than giving everyone Admin access.
2. Approval gate adoption (0–15 points)
Approval gates prevent unreviewed AI actions from reaching production. This factor measures what percentage of your sensitive workflows include approval steps, and whether approvers are actively reviewing (not just auto-approving).
How to improve: Identify workflows that touch external systems, financial data, or customer communications. Add approval steps before those critical actions. Aim for approval gates on at least 80% of workflows classified as sensitive.
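The 80% target above is just a coverage ratio. As a minimal sketch (the function and workflow IDs are illustrative, not part of any real JieGou API), you could track it like this:

```python
def approval_gate_coverage(sensitive_workflows, gated_workflows):
    """Fraction of sensitive workflows that include an approval step.

    Both arguments are sets of workflow IDs. Hypothetical helper for
    illustration only -- not a JieGou API.
    """
    if not sensitive_workflows:
        return 1.0  # nothing sensitive to gate
    return len(sensitive_workflows & gated_workflows) / len(sensitive_workflows)


coverage = approval_gate_coverage(
    sensitive_workflows={"invoice-export", "crm-sync", "email-blast", "refunds"},
    gated_workflows={"invoice-export", "crm-sync", "email-blast"},
)
print(f"{coverage:.0%}")  # 3 of 4 sensitive workflows gated -> 75%, below the 80% target
```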
3. Audit log completeness (0–10 points)
Having audit logging enabled is the baseline. This factor assesses whether your logs are complete — covering all actions, all users, all workflow executions — and whether they’re being retained for an adequate period.
How to improve: JieGou enables audit logging by default, so most organizations start with a solid score here. Ensure you haven’t disabled logging on any workflows, and that your retention settings meet your compliance requirements.
4. BYOK encryption (0–15 points)
Bring Your Own Key (BYOK) means your LLM API keys are encrypted with keys you control, not stored in plaintext or encrypted with platform-managed keys. This factor measures whether you’re using BYOK and whether keys are rotated regularly.
How to improve: Enable BYOK for all LLM providers. Set up key rotation on a quarterly schedule. Use separate API keys for different departments if your organization requires data isolation.
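A quarterly rotation schedule is easy to audit programmatically. Here's a hedged sketch, assuming you track last-rotation dates per provider yourself (the data structure is illustrative, not something JieGou exposes):

```python
from datetime import date, timedelta

ROTATION_INTERVAL = timedelta(days=90)  # quarterly, per the guidance above


def keys_due_for_rotation(last_rotated, today=None):
    """Return provider names whose BYOK keys are older than one quarter.

    `last_rotated` maps provider name -> date of last rotation.
    Hypothetical helper for illustration only.
    """
    today = today or date.today()
    return [
        name
        for name, rotated in last_rotated.items()
        if today - rotated > ROTATION_INTERVAL
    ]


due = keys_due_for_rotation(
    {"anthropic": date(2024, 1, 10), "openai": date(2024, 5, 2)},
    today=date(2024, 6, 1),
)
# "anthropic" is 143 days old and flagged; "openai" is 30 days old and fine
```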
5. MCP certification (0–10 points)
MCP (Model Context Protocol) integrations connect your AI workflows to external services. This factor assesses whether you’re using certified, OAuth-based integrations versus uncertified or manual connections.
How to improve: Prefer JieGou’s built-in OAuth integrations over manual API key connections. Review your integration inventory and migrate any uncertified connections to certified alternatives.
6. Model diversity (0–10 points)
Relying on a single AI model creates concentration risk. This factor measures whether you’re using models from multiple providers (Anthropic, OpenAI, Google) and whether you have fallback configurations.
How to improve: Configure at least two LLM providers. Set up fallback models so workflows continue if one provider experiences downtime. Use different models for different use cases based on their strengths.
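The fallback logic amounts to walking an ordered preference list and taking the first healthy provider. A minimal sketch, assuming your orchestration layer can report provider health (provider names, model names, and the health check are all illustrative):

```python
def pick_model(providers, is_healthy):
    """Return the first healthy (provider, model) pair from a preference list.

    `providers` is an ordered list of (provider, model) tuples;
    `is_healthy` is whatever health check your stack provides.
    Hypothetical sketch, not a JieGou API.
    """
    for provider, model in providers:
        if is_healthy(provider):
            return provider, model
    raise RuntimeError("all configured providers are down")


preferences = [("anthropic", "claude-sonnet"), ("openai", "gpt-4o")]
down = {"anthropic"}  # simulate a provider outage
provider, model = pick_model(preferences, lambda p: p not in down)
# the workflow falls through to the second provider
```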
7. Cost transparency (0–10 points)
AI costs can spiral without visibility. This factor evaluates whether you have cost tracking, budgets, and alerts configured — and whether team members can see the cost impact of their workflows.
How to improve: Enable per-workflow cost tracking. Set department-level budgets with alerts at 80% and 100% thresholds. Review cost reports monthly and optimize expensive workflows.
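The 80% and 100% alert levels reduce to a simple threshold check. A sketch (the function is illustrative; JieGou's actual alerting is configured in the product, not in code):

```python
def budget_alerts(spend, budget, thresholds=(0.8, 1.0)):
    """Return the alert thresholds a department's spend has crossed.

    Mirrors the 80% / 100% alert levels suggested above.
    Hypothetical helper for illustration only.
    """
    ratio = spend / budget
    return [t for t in thresholds if ratio >= t]


alerts = budget_alerts(spend=920.0, budget=1000.0)
# 92% of budget: the 80% alert has fired, the 100% alert has not
```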
8. Memory governance (0–15 points)
AI memory — the context and data retained between sessions — needs governance too. This factor measures whether you have policies for what data is stored, how long it’s retained, and who can access historical context.
How to improve: Configure memory retention policies by department. Set appropriate TTLs for different data sensitivity levels. Ensure PII handling follows your organization’s data governance policies.
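The eight factor ranges above combine into the single 0-100 number. The rubric inside each factor isn't public, but the arithmetic is just a capped sum; this sketch uses the point maxima exactly as listed in this article (the factor keys are my own naming, not a JieGou API):

```python
# Maximum points per factor, as listed above (they sum to 100).
FACTOR_MAX = {
    "rbac_coverage": 15,
    "approval_gates": 15,
    "audit_logs": 10,
    "byok": 15,
    "mcp_certification": 10,
    "model_diversity": 10,
    "cost_transparency": 10,
    "memory_governance": 15,
}


def governance_score(factor_points):
    """Sum per-factor points into a 0-100 score, clamping each to its range.

    Illustrates only how the eight ranges combine; how points are earned
    within a factor is up to the product's rubric.
    """
    return sum(
        max(0, min(factor_points.get(name, 0), cap))
        for name, cap in FACTOR_MAX.items()
    )


score = governance_score({"rbac_coverage": 10, "audit_logs": 10, "byok": 15})
# a fresh organization with default logging, basic RBAC, and BYOK -> 35
```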
From 40 to 80+: a practical improvement path
A typical organization scores around 35–45 right after signing up and starting to use JieGou. Audit logging is on by default, basic RBAC is in place from the invite flow, and some integrations are certified. That’s a solid start.
Here’s the fastest path to 80+:
Week 1 (40 → 55): Enable BYOK encryption for your primary LLM provider. This alone can add up to 15 points and takes about five minutes.
Week 2 (55 → 65): Add approval gates to your top 5 most sensitive workflows. Review your RBAC assignments and ensure every team member has the right role — not just “Admin for everyone.”
Week 3 (65 → 75): Configure a second LLM provider as a fallback. Set up cost tracking and department budgets. Migrate any manual integrations to OAuth-certified alternatives.
Week 4 (75 → 80+): Configure memory retention policies. Review audit log retention settings. Do a final RBAC review to ensure department isolation is properly configured.
Why GovernanceScore matters
GovernanceScore gives you three things:
A baseline. Before you can improve governance, you need to know where you stand. A single number makes it easy to communicate to leadership and track over time.
A roadmap. Each factor tells you exactly what to work on next. Instead of vague recommendations, you get specific, actionable improvements ranked by impact.
Proof of progress. When your CISO asks “how’s our AI governance?” you can answer with a number, a trend line, and a breakdown by factor. That’s a conversation that takes two minutes instead of two hours.
AI governance isn’t a destination — it’s a practice. GovernanceScore makes that practice measurable.