The 10-Layer Stack Was Already the Deepest in the Category
When we shipped our governance stack last month, it was already the most comprehensive in the AI automation category. Ten layers covering identity, data protection, human oversight, compliance, and observability. No competitor had anything close.
But even a 10-layer stack had a gap. You could control who ran a workflow. You could control what data the workflow accessed. You could require human approval before the workflow executed. What you could not control was which specific tools an agent invoked during execution.
The Gap: Tool-Level Governance
Consider a workflow that uses MCP servers for Slack, Gmail, and Salesforce. The workflow is approved to run. The agent has the right permissions. But should that agent be able to send an email via Gmail without explicit approval? Should it update a Salesforce record autonomously?
n8n recognized this gap and added human-in-the-loop at the individual tool call level. Their implementation lets individual users approve tool calls during execution. But it is user-controlled, not admin-controlled. There is no organizational policy layer.
We took a different approach.
Tool Approval Gates: The Newest Layer
Tool approval gates give administrators granular control over which tools require human approval before execution. This is not a workflow-level pause. It is a tool-level gate.
Here is how it works:
- Admin configures approval-required tools in the MCP ACL settings. Any tool in the account’s MCP server catalog can be flagged.
- During workflow execution, when the agent attempts to invoke a flagged tool, execution pauses.
- An approval request is created showing the tool name, the input parameters the agent wants to send, and the agent identity context.
- An approver reviews and decides — approve to proceed, deny to skip, or let the timeout auto-deny.
- Execution resumes with the decision recorded in the audit trail.
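The flow above can be sketched in Python. Everything here is illustrative: the tool names, the `APPROVAL_REQUIRED` set, and the `get_decision` callback are stand-ins for the platform's actual MCP ACL settings and approval UI, not its real API.

```python
import time
import uuid
from enum import Enum

class Decision(Enum):
    APPROVED = "approved"
    DENIED = "denied"

# Stand-ins for the MCP ACL settings and the audit trail.
APPROVAL_REQUIRED = {"gmail.send_email", "salesforce.update_record"}
AUDIT_LOG = []

def execute(tool, params):
    # Placeholder for the real MCP tool invocation.
    return {"tool": tool, "status": "executed"}

def invoke_tool(tool, params, agent_id, get_decision, timeout_s=300):
    """Gate a tool call: flagged tools pause until an approver decides.

    `get_decision` polls the approver's choice and returns a Decision,
    or None while the request is still pending.
    """
    if tool not in APPROVAL_REQUIRED:
        return execute(tool, params)  # unflagged tools run immediately

    request_id = str(uuid.uuid4())
    deadline = time.monotonic() + timeout_s
    decision = None
    while time.monotonic() < deadline:
        decision = get_decision(request_id, tool, params, agent_id)
        if decision is not None:
            break
        time.sleep(1)

    outcome = decision.value if decision else "timed_out"  # expiry auto-denies
    AUDIT_LOG.append({"request": request_id, "tool": tool,
                      "agent": agent_id, "decision": outcome})
    if decision is Decision.APPROVED:
        return execute(tool, params)
    return None  # denied or timed out: the call is skipped, the workflow resumes
```

Note that a denied call returns `None` rather than raising: the workflow continues with the tool call skipped, matching the "deny to skip" semantics above.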
The key difference from n8n’s approach: approval policies are set by administrators, not by individual users. The approver can be configured as the agent’s sponsor, any user with a minimum role, or specific named users. Timeout and notification settings are organizational policy, not per-user preference.
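As a sketch of what that admin-controlled policy could look like (the field names and role ranks below are hypothetical, not the product's actual schema), the approver rule, timeout, and notification settings all live in one org-level object rather than in per-user preferences:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class ToolApprovalPolicy:
    # Who may approve: the agent's sponsor, anyone at or above a
    # minimum role, or an explicit list of named users.
    approver_mode: str                     # "sponsor" | "min_role" | "named_users"
    min_role: Optional[str] = None
    named_users: Tuple[str, ...] = ()
    timeout_minutes: int = 30              # expiry auto-denies
    notify_channels: Tuple[str, ...] = ("email",)

# Hypothetical role ranking used by the min_role check.
ROLE_RANK = {"member": 1, "manager": 2, "admin": 3}

def can_approve(policy, user_id, user_role, agent_sponsor):
    """Return True if this user satisfies the org-level approval policy."""
    if policy.approver_mode == "sponsor":
        return user_id == agent_sponsor
    if policy.approver_mode == "min_role":
        return ROLE_RANK[user_role] >= ROLE_RANK[policy.min_role]
    return user_id in policy.named_users
```

Making the policy object frozen mirrors the design point: individual users consume the policy during execution but cannot edit it.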
The Complete 11-Layer Stack
Here is what each layer does:
| # | Layer | What It Governs |
|---|---|---|
| 1 | Role-Based Access Control | Who can do what (6 roles, 24 permissions, department scoping) |
| 2 | Agent Identity | Per-agent permissions, rate limits, and audit context propagation |
| 3 | Audit Logging | 280+ action types with immutable, exportable evidence |
| 4 | PII Detection + Tokenization | Automatic detection and reversible tokenization of sensitive data |
| 5 | Graduated Autonomy | 4 trust levels (manual to full auto) with policy-driven escalation |
| 6 | Tool Approval Gates | Admin-controlled per-tool approval before execution |
| 7 | Data Residency Controls | HIPAA, GDPR, PCI-DSS, SOX, FedRAMP compliance presets |
| 8 | Envelope Key Encryption | AES-256-GCM with HKDF-SHA256 key derivation for BYOK |
| 9 | Compliance Dashboard + SOC 2 | 21 policies, 17 TSC controls, evidence export, Vanta integration |
| 10 | EU AI Act Compliance Engine | 10-article mapping (Art. 9-17, 52), risk classification, evidence generation |
| 11 | Governance Readiness Assessment | Self-serve scoring across all governance layers with recommendations |
Each layer operates independently but integrates with the others. Agent identity (layer 2) propagates through tool approval gates (layer 6) so approvers see exactly which agent is requesting tool access. Audit logging (layer 3) records every approval decision. The compliance dashboard (layer 9) tracks tool approval compliance across the organization.
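A minimal sketch of that propagation, with hypothetical field names: the identity context attached to the agent rides unchanged into the approval request and then into the audit record.

```python
def build_approval_request(tool, params, agent_context):
    # Layer 2 -> layer 6: the approver sees exactly which agent is
    # requesting the tool, and with what input parameters.
    return {"tool": tool, "params": params, "agent": agent_context}

def audit_decision(request, decision, approver_id):
    # Layer 6 -> layer 3: every approval decision lands in the audit
    # trail with the full identity context intact.
    return {"action": "tool_approval.decision",
            "decision": decision,
            "approver": approver_id,
            **request}
```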
Why Depth Matters More Than Breadth
Microsoft, OpenAI, and Anthropic are all investing in governance. Microsoft’s Agent Framework adds basic role assignment. OpenAI Frontier has per-agent identity. Anthropic offers enterprise admin controls.
These are important first steps. But governance depth is measured in layers, not features. A platform with role-based access but no PII detection has a governance gap. A platform with audit logging but no tool-level controls has a governance gap. A platform with compliance presets but no EU AI Act mapping has a governance gap.
11 layers means 11 potential failure points are covered. Not because we chose 11 as a number, but because enterprise AI deployments have at least 11 distinct governance requirements.
What Is Next
The governance arms race is accelerating. Every major platform is shipping governance features. The question is no longer whether to govern AI agents, but how deeply.
We will keep adding layers as new governance requirements emerge. The stack is designed to grow.
Explore the governance stack | Try the Governance Readiness Assessment