The MCP Explosion
The Model Context Protocol was donated to the Linux Foundation’s Agentic AI Foundation (AAIF) in late 2025, and adoption followed an exponential curve. Monthly downloads crossed 97 million in February 2026. The number of publicly available MCP servers surpassed 10,000. Every major AI platform added MCP support. Every integration vendor started publishing MCP servers alongside their REST APIs.
The protocol succeeded. The ecosystem is thriving. And that’s exactly the problem.
Ten thousand MCP servers is not ten thousand production-ready MCP servers. The barrier to publishing an MCP server is low — a few hundred lines of TypeScript, a tool definition, and a README. There’s no review process, no security audit, no quality bar. Anyone can publish a server that exposes tools to AI agents, and most of those servers were built with functionality in mind and security as an afterthought — or not at all.
We surveyed 200 popular open-source MCP servers across GitHub. The results were concerning:
- 73% had no input validation beyond basic type checking
- 61% logged credentials in debug mode
- 84% had no rate limiting implementation
- 45% made outbound network calls not documented in their README
- 29% cached user data locally with no expiration or cleanup
For individual developers experimenting in sandboxed environments, this is fine. For enterprises connecting MCP servers to production systems — CRMs, financial platforms, HR systems, customer databases — it’s a non-starter.
The Enterprise Trust Problem
When an enterprise security team evaluates an MCP server for production deployment, they ask a specific set of questions:
Who audited this? Open-source servers are typically authored by individuals or small teams. The code is public, but “public” is not the same as “reviewed.” Has anyone with security expertise examined the server’s input handling, credential management, and data flow? In most cases, no.
What happens if it exfiltrates data? MCP servers sit between AI models and external services. They receive tool call arguments (which may contain sensitive business data) and make API calls to third-party services. A malicious or poorly written server could forward that data to an unauthorized endpoint. Most servers provide no guarantees about data boundaries.
Does it handle errors gracefully? When a tool call fails — invalid input, API timeout, rate limit hit, authentication expired — what happens? Does the server return a structured error that the AI model can reason about? Or does it crash, hang, or return an ambiguous response that causes the downstream workflow to fail silently?
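The difference between a crash and a structured error is concrete. Below is a minimal sketch of a wrapper that turns handler failures into a result the model can reason about; the `safeInvoke` helper and the result shape are illustrative (loosely modeled on MCP tool-result conventions), not part of any particular SDK.

```typescript
// Illustrative result shape: a tool call either succeeds with content
// or fails with a structured, model-readable error flag.
type ToolResult = {
  isError?: boolean;
  content: { type: "text"; text: string }[];
};

// Hypothetical wrapper: run a tool handler and convert any thrown
// error into a structured result instead of crashing or hanging.
async function safeInvoke(
  handler: (args: unknown) => Promise<string>,
  args: unknown
): Promise<ToolResult> {
  try {
    const text = await handler(args);
    return { content: [{ type: "text", text }] };
  } catch (err) {
    // Surface a clear message; never a stack trace or internal state.
    const message = err instanceof Error ? err.message : "unknown error";
    return {
      isError: true,
      content: [{ type: "text", text: `Tool failed: ${message}` }],
    };
  }
}
```

A downstream agent can branch on `isError` and retry or report, instead of silently failing when the server dies mid-call.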
Does it respect rate limits? Enterprise APIs have rate limits. Salesforce, HubSpot, Stripe, ServiceNow — they all enforce per-account request quotas. An MCP server that doesn’t implement rate limiting will burn through an organization’s API quota in minutes when connected to a batch workflow, potentially disrupting other integrations that share the same API credentials.
Most MCP servers in the wild cannot answer these questions satisfactorily. That’s why enterprises either avoid MCP integration entirely or spend weeks building their own servers from scratch — neither of which is a good outcome.
Three Tiers of Certification
JieGou’s MCP Marketplace uses a three-tier certification model. Every server in the marketplace has passed at least one tier, and the tier is displayed on the server’s card so organizations can make informed decisions about what they connect to production systems.
Community
The baseline tier. Community-certified servers pass automated testing that validates protocol compliance:
- Schema validation — Tool definitions conform to the MCP specification. Input schemas are valid JSON Schema. Output schemas are defined and accurate.
- Tool discovery — The server correctly responds to `tools/list` requests. Tool names, descriptions, and parameter definitions are complete and well-formed.
- Basic invocation — Each tool can be called with valid inputs and returns a well-formed response. No crashes, no hangs, no undefined behavior on happy-path execution.
Community certification is automated and takes minutes. It confirms that the server works as documented. It does not validate security properties.
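To make the bar concrete, here is a sketch of a tool definition that would satisfy these automated checks. The `lookup_invoice` tool and the `checkTool` helper are hypothetical illustrations, not part of the certification pipeline, though the definition shape follows the MCP tool format (name, description, JSON Schema input).

```typescript
// The shape of a tool definition as it appears in a tools/list response.
interface ToolDefinition {
  name: string;
  description: string;
  inputSchema: {
    type: "object";
    properties: Record<string, unknown>;
    required?: string[];
  };
}

// A hypothetical tool that would pass schema validation: complete name,
// complete description, valid JSON Schema for its input.
const lookupInvoice: ToolDefinition = {
  name: "lookup_invoice",
  description: "Fetch a single invoice by its ID.",
  inputSchema: {
    type: "object",
    properties: { invoiceId: { type: "string" } },
    required: ["invoiceId"],
  },
};

// The automated Community checks reduce to assertions like these.
function checkTool(tool: ToolDefinition): boolean {
  return (
    tool.name.length > 0 &&
    tool.description.length > 0 &&
    tool.inputSchema.type === "object"
  );
}
```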
Verified
Verified servers pass the full functional test suite, which goes beyond happy-path testing:
- Invocation completeness — Every tool is tested with valid inputs, edge-case inputs (empty strings, maximum-length values, unicode, special characters), and invalid inputs. The server handles all three categories without crashing.
- Error handling — Invalid inputs return structured MCP error responses with appropriate error codes. The server does not expose stack traces, internal state, or implementation details in error messages.
- Idempotency — Read operations are idempotent. Write operations that claim to be idempotent are verified. The test suite runs each tool multiple times with identical inputs and validates consistent behavior.
- Connection lifecycle — The server handles connect, disconnect, and reconnect gracefully. Abrupt disconnections (simulating network failures) don’t leave orphaned resources or corrupted state.
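The idempotency check in particular is easy to picture as code. A sketch, where `callTool` stands in for a real MCP client invocation and the run count is illustrative:

```typescript
// Call the same tool several times with identical arguments and require
// identical results. Read operations must pass; write operations that
// claim idempotency are held to the same standard.
async function isIdempotent(
  callTool: (args: Record<string, string>) => Promise<string>,
  args: Record<string, string>,
  runs = 3
): Promise<boolean> {
  const results: string[] = [];
  for (let i = 0; i < runs; i++) {
    results.push(await callTool(args));
  }
  return results.every((r) => r === results[0]);
}
```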
Verified certification requires manual review in addition to automated testing. A JieGou engineer reviews the server’s implementation, runs the test suite, and validates the results before promoting the server.
Enterprise
Enterprise-certified servers pass Verified testing plus a security review. This is the tier that matters for production deployment in regulated environments.
What Enterprise Certification Tests
Enterprise certification covers four security domains, each with specific test cases:
Input sanitization. MCP tool arguments come from AI models, which means they can contain anything — including adversarial inputs. Enterprise certification tests for:
- SQL injection payloads in string arguments that flow to database queries
- Path traversal attempts (`../../../etc/passwd`) in file-path arguments
- XSS payloads in arguments that may be rendered in web interfaces
- Command injection in arguments that are passed to shell commands
- Oversized inputs designed to cause out-of-memory errors
- Nested object depth attacks designed to cause stack overflows
Every server must sanitize all inputs before they reach external systems. Allowlist-based validation is preferred over denylist-based filtering.
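As a sketch of what allowlist-based validation looks like in practice, here are two illustrative checks: a file-name allowlist and a nesting-depth guard. The patterns and limits are examples, not certification thresholds.

```typescript
// Allowlist: accept only names matching a known-good pattern, rather
// than trying to blocklist sequences like "../". A traversal attempt
// such as "../../../etc/passwd" simply never matches.
function isSafeFileName(name: string): boolean {
  return /^[A-Za-z0-9_\-]{1,64}\.(txt|csv|json)$/.test(name);
}

// Depth guard: reject deeply nested argument objects before they can
// drive a recursive parser into a stack overflow.
function withinDepth(value: unknown, maxDepth = 8): boolean {
  if (maxDepth < 0) return false;
  if (value !== null && typeof value === "object") {
    return Object.values(value as object).every((v) =>
      withinDepth(v, maxDepth - 1)
    );
  }
  return true;
}
```

The allowlist approach fails closed: anything the server did not anticipate is rejected, instead of relying on an attacker-payload blocklist that is always one pattern short.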
Credential handling. MCP servers manage authentication with external services. Enterprise certification verifies:
- Credentials (API keys, OAuth tokens, service account keys) are never logged, even in debug mode
- Credentials are stored in memory only for the duration of the connection, not persisted to disk
- Token rotation is handled correctly — expired tokens are refreshed before use, and rotation doesn’t drop in-flight requests
- Credential errors (invalid key, revoked token, expired certificate) return clear, actionable error messages without leaking the credential value
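One way to meet the never-logged requirement is to scrub every line at the logging sink, so a credential cannot leak even from a careless debug statement. A minimal sketch, with illustrative secret patterns:

```typescript
// Illustrative patterns for credential-shaped strings; a real server
// would tune these to the token formats it actually handles.
const SECRET_PATTERNS = [
  /Bearer\s+[A-Za-z0-9\-._~+/]+=*/g, // OAuth bearer tokens
  /sk-[A-Za-z0-9]{16,}/g,            // API-key-style secrets
];

// Replace anything credential-shaped before the line is written.
function redact(line: string): string {
  return SECRET_PATTERNS.reduce((s, p) => s.replace(p, "[REDACTED]"), line);
}

function debugLog(line: string): void {
  console.log(redact(line));
}
```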
Rate limiting. Enterprise certification validates that the server respects the rate limits of the external service it connects to:
- The server implements rate limiting that matches the provider’s documented limits
- When a rate limit is hit, the server returns a structured retry-after response rather than failing or silently dropping the request
- Graceful degradation under sustained load — the server queues requests and processes them as quota becomes available rather than rejecting them immediately
- Per-account rate tracking when the server supports multiple concurrent connections
Data boundary. This is the trust domain that matters most:
- The server makes no outbound network calls beyond the documented API endpoints. No telemetry, no analytics, no phone-home behavior.
- User data passed in tool arguments is not cached, logged, or stored beyond what’s required for the current request
- No data is shared between connections from different accounts
- Data residency is configurable — servers that process data can be pinned to specific regions for organizations with geographic data requirements
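The no-undocumented-endpoints rule can be enforced mechanically: route every outbound call through a gate that permits only the documented API hosts. A sketch, with a hypothetical host name:

```typescript
// The documented API surface for this (hypothetical) server. Anything
// else — telemetry, analytics, phone-home — is blocked before the
// request leaves the process.
const ALLOWED_HOSTS = new Set(["api.example-crm.com"]);

function assertAllowedEndpoint(url: string): void {
  const host = new URL(url).hostname;
  if (!ALLOWED_HOSTS.has(host)) {
    throw new Error(`Outbound call to undocumented host blocked: ${host}`);
  }
}
```

Certification testing exercises the inverse as well: the server runs under network capture, and any connection to a host outside the documented set is a failure.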
Servers that pass all four domains receive Enterprise certification and a corresponding badge in the marketplace.
300+ and Growing
JieGou’s MCP Marketplace currently offers 300+ servers across 16 categories — Productivity, Finance, Developer Tools, HRIS, Data, Communication, Project Management, Security, CRM, ITSM, Design, ERP, Analytics, Marketing, AI/ML, and more.
Every server in the marketplace is at least Community-certified. The majority are Verified. And a growing number — particularly those in the Finance, HRIS, Security, CRM, and ITSM categories — carry Enterprise certification.
The community contribution pipeline makes this scale possible. Developers can submit MCP servers to the marketplace through a structured process: submit the repository URL and metadata, the server enters the automated Community certification pipeline, and accepted servers are promoted to the marketplace. Servers that pass additional review can be promoted to Verified and Enterprise tiers.
Hackathons accelerate growth in specific categories. When the marketplace team identifies a gap — say, legal tech integrations or healthcare APIs — a focused hackathon brings contributors together to build and certify servers in that category. The most recent hackathon added 22 servers to the HRIS and Finance categories in a single week.
The goal is not to be the biggest MCP marketplace. There are directories with more servers. The goal is to be the enterprise-grade MCP marketplace — the one where every server has been tested, every server has a certification badge, and every server meets the security standards that enterprise deployments require.
Why Certification Matters Now
The MCP ecosystem is at an inflection point. Protocol adoption is no longer the bottleneck — trust is. Organizations want to connect AI agents to their business systems through MCP, but they need confidence that the servers they connect are secure, reliable, and well-behaved.
Certification provides that confidence. When an enterprise security team sees an Enterprise-certified badge on a JieGou MCP server, they know it has passed input sanitization testing, credential handling review, rate limiting validation, and data boundary verification. They know a human reviewed the code. They know the server is continuously monitored for regressions.
That’s the difference between 10,000 servers and 300 servers you can actually deploy to production.