From 9 to 10: Why We Added Tool Approval Gates to Our Governance Stack

JieGou's governance stack grew from 9 to 10 layers with the addition of tool approval gates. Here's what changed, why it matters, and how each layer protects your AI deployments.

JieGou Team · 4 min read

The 9-Layer Stack Was Already the Deepest in the Category

When we shipped our governance stack last month, it was already the most comprehensive in the AI automation category. Nine layers covering identity, data protection, human oversight, compliance, and observability. No competitor had anything close.

But a 9-layer stack has a gap. You can control who runs a workflow. You can control what data the workflow accesses. You can require human approval before the workflow executes. What you could not control was which specific tools an agent invoked during execution.

The Gap: Tool-Level Governance

Consider a workflow that uses MCP servers for Slack, Gmail, and Salesforce. The workflow is approved to run. The agent has the right permissions. But should that agent be able to send an email via Gmail without explicit approval? Should it update a Salesforce record autonomously?

n8n recognized this gap and added human-in-the-loop at the individual tool call level. Their implementation lets individual users approve tool calls during execution. But it is user-controlled, not admin-controlled. There is no organizational policy layer.

We took a different approach.

Tool Approval Gates: The 10th Layer

Tool approval gates give administrators granular control over which tools require human approval before execution. This is not a workflow-level pause. It is a tool-level gate.

Here is how it works:

  1. Admin configures approval-required tools in the MCP ACL settings. Any tool in the account’s MCP server catalog can be flagged.
  2. During workflow execution, when the agent attempts to invoke a flagged tool, execution pauses.
  3. An approval request is created showing the tool name, the input parameters the agent wants to send, and the agent identity context.
  4. An approver reviews and decides — approve to proceed, deny to skip, or let the timeout auto-deny.
  5. Execution resumes with the decision recorded in the audit trail.
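The five steps above can be sketched as a small state machine. This is an illustrative sketch only, not JieGou's actual API; names like `ApprovalGate`, `ApprovalRequest`, and the tool identifiers are hypothetical:

```python
import time
from dataclasses import dataclass, field
from enum import Enum


class Decision(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    DENIED = "denied"
    TIMED_OUT = "timed_out"  # timeout auto-denies


@dataclass
class ApprovalRequest:
    tool: str      # e.g. "gmail.send_email" (hypothetical identifier)
    params: dict   # the input parameters the agent wants to send
    agent_id: str  # agent identity context (layer 2)
    created_at: float = field(default_factory=time.time)
    decision: Decision = Decision.PENDING


class ApprovalGate:
    def __init__(self, flagged_tools: set[str], timeout_s: float = 3600.0):
        self.flagged_tools = flagged_tools  # admin-configured in MCP ACL settings
        self.timeout_s = timeout_s
        self.audit_trail: list[tuple[str, str, str]] = []

    def invoke(self, tool: str, params: dict, agent_id: str, approve_fn) -> Decision:
        """Gate a tool call: unflagged tools pass through, flagged tools pause."""
        if tool not in self.flagged_tools:
            return Decision.APPROVED  # no gate configured for this tool
        req = ApprovalRequest(tool, params, agent_id)
        # Execution pauses here until an approver decides or the timeout expires.
        if time.time() - req.created_at > self.timeout_s:
            req.decision = Decision.TIMED_OUT
        else:
            req.decision = Decision.APPROVED if approve_fn(req) else Decision.DENIED
        # Every decision is recorded in the audit trail (layer 3).
        self.audit_trail.append((agent_id, tool, req.decision.value))
        return req.decision
```

For example, `ApprovalGate({"gmail.send_email"})` would let a Slack post through untouched but pause a Gmail send until `approve_fn` (standing in for the human approver) returns a decision.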

The key difference from n8n’s approach: approval policies are set by administrators, not by individual users. The approver can be configured as the agent’s sponsor, any user with a minimum role, or specific named users. Timeout and notification settings are organizational policy, not per-user preference.
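A per-tool policy along these lines could be expressed roughly as follows. The field names, role ordering, and modes are assumptions for illustration, not JieGou's actual schema:

```python
from dataclasses import dataclass, field

# Role ranking used for "minimum role" approver policies (illustrative ordering).
ROLE_RANK = {"member": 0, "manager": 1, "admin": 2, "owner": 3}


@dataclass
class ToolApprovalPolicy:
    tool: str
    approver_mode: str                 # "sponsor" | "min_role" | "named_users"
    min_role: str = "admin"            # used when approver_mode == "min_role"
    named_users: list[str] = field(default_factory=list)
    timeout_minutes: int = 60          # organizational policy, not per-user
    notify_channel: str = "email"      # organizational notification setting

    def can_approve(self, user: str, role: str, sponsor_of_agent: str) -> bool:
        """Decide whether `user` may approve a request gated by this policy."""
        if self.approver_mode == "sponsor":
            return user == sponsor_of_agent
        if self.approver_mode == "min_role":
            return ROLE_RANK.get(role, -1) >= ROLE_RANK[self.min_role]
        if self.approver_mode == "named_users":
            return user in self.named_users
        return False
```

The point of the sketch is the ownership model: the policy object is created and stored by an administrator, so individual users cannot opt themselves out of a gate the way they could with a per-user preference.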

The Complete 10-Layer Stack

Here is what each layer does:

| # | Layer | What It Governs |
| --- | --- | --- |
| 1 | Role-Based Access Control | Who can do what (6 roles, 24 permissions, department scoping) |
| 2 | Agent Identity | Per-agent permissions, rate limits, and audit context propagation |
| 3 | Audit Logging | 280+ action types with immutable, exportable evidence |
| 4 | PII Detection + Tokenization | Automatic detection and reversible tokenization of sensitive data |
| 5 | Graduated Autonomy | 4 trust levels (manual to full auto) with policy-driven escalation |
| 6 | Tool Approval Gates | Admin-controlled per-tool approval before execution |
| 7 | Data Residency Controls | HIPAA, GDPR, PCI-DSS, SOX, FedRAMP compliance presets |
| 8 | Envelope Key Encryption | AES-256-GCM with HKDF-SHA256 key derivation for BYOK |
| 9 | Compliance Dashboard + SOC 2 | 21 policies, 17 TSC controls, evidence export, Vanta integration |
| 10 | EU AI Act Compliance Engine | 10-article mapping (Art. 9-17, 52), risk classification, evidence generation |
|   | Governance Readiness Assessment | Self-serve scoring across all governance layers with recommendations |

Each layer operates independently but integrates with the others. Agent identity (layer 2) propagates through tool approval gates (layer 6) so approvers see exactly which agent is requesting tool access. Audit logging (layer 3) records every approval decision. The compliance dashboard (layer 9) tracks tool approval compliance across the organization.
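That cross-layer flow might look something like this. The event shape and field names are assumptions for illustration, not JieGou's actual audit schema:

```python
import json
import time


def approval_audit_event(agent_id: str, sponsor: str, tool: str,
                         params: dict, decision: str, approver: str) -> str:
    """Build an audit record for one tool-approval decision.

    Agent identity (layer 2) is propagated into the record so that
    audit logging (layer 3) and the compliance dashboard (layer 9)
    can attribute every decision to a specific agent and approver.
    """
    event = {
        "action": "tool_approval.decision",  # hypothetical action type
        "timestamp": time.time(),
        "agent": {"id": agent_id, "sponsor": sponsor},
        "tool": tool,
        "params": params,
        "decision": decision,                # approved | denied | timed_out
        "approver": approver,
    }
    return json.dumps(event, sort_keys=True)
```

Serializing the full identity context with each decision is what lets a dashboard later answer questions like "which agents had Gmail sends denied this quarter?" without joining separate logs.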

Why Depth Matters More Than Breadth

Microsoft, OpenAI, and Anthropic are all investing in governance. Microsoft’s Agent Framework adds basic role assignment. OpenAI Frontier has per-agent identity. Anthropic offers enterprise admin controls.

These are important first steps. But governance depth is measured in layers, not features. A platform with role-based access but no PII detection has a governance gap. A platform with audit logging but no tool-level controls has a governance gap. A platform with compliance presets but no EU AI Act mapping has a governance gap.

Ten layers means ten potential failure points covered. Not because ten was a target number, but because enterprise AI deployments have at least ten distinct governance requirements.

What's Next

The governance arms race is accelerating. Every major platform is shipping governance features. The question is no longer whether to govern AI agents, but how deeply.

We will keep adding layers as new governance requirements emerge. The stack is designed to grow.

Explore the governance stack | Try the Governance Readiness Assessment

Tags: governance, tool-approval, enterprise, compliance, ai-agents