If you’ve installed more than a couple of MCP servers, you’ve already discovered the truth: MCP server management becomes an ops problem fast.
One teammate adds a “quick” server in Cursor, another pins a different version for Claude Desktop, VS Code silently uses a workspace .vscode/mcp.json, and suddenly nobody can reproduce tool behavior across machines.
This playbook is for agencies and builders who want repeatable, workflow-grade MCP server management: install, update, toggle, and sync servers across Claude Desktop, Cursor, and VS Code—without drowning your agent context in tool bloat.
Why MCP server management breaks in the real world
Most messy MCP setups fail in one (or more) of these four ways:
- Config drift: every client (and sometimes every workspace) has “the” config.
- Tool bloat: dozens of tools load into context when you only needed three.
- Update breakage: a server update changes behavior or dependencies and bricks your team.
- Permissions confusion: you can’t tell which tools are safe, destructive, or client-specific.
The fix is the same approach you’d use for any dependency you ship to production:
- a single source of truth
- explicit environments/profiles
- change control + smoke tests
- easy rollback
Step 1) Create a single source of truth (SSOT) for servers
Treat MCP servers like a dependency graph, not “plugins I installed once.” Put them in a repo (even if it’s just for you).
A simple structure that works well:
mcp/
catalog.json
profiles/
minimal.json
dev.json
client-acme.json
clients/
claude-desktop.json
cursor.json
vscode.workspace.json
scripts/
generate-config.mjs
smoke-test.mjs
A small but powerful catalog.json
Your catalog is your stable identifier layer: names, where the server comes from, and how it should run.
{
"servers": {
"filesystem": {
"transport": "stdio",
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem"],
"env": {
"FILESYSTEM_ROOT": "${WORKSPACE_ROOT}"
},
"risk": "high"
},
"docs": {
"transport": "streamable-http",
"url": "https://example.com/mcp",
"headers": {
"Authorization": "Bearer ${DOCS_TOKEN}"
},
"risk": "low"
}
}
}
Notes:
- MCP supports stdio (client starts a subprocess) and Streamable HTTP transports; stdio is commonly recommended when possible.
- Keep secrets out of Git. Prefer env vars, input variables, or OS keychains.
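Since the catalog references placeholders like ${WORKSPACE_ROOT} and ${DOCS_TOKEN}, your generator needs a small expansion step at generation time. A minimal sketch (the expandEnv helper name and its behavior are our assumptions, not part of any MCP client):

```javascript
// Expand ${VAR} placeholders from process.env at generation time,
// so secrets never live in the catalog itself. (Hypothetical helper.)
function expandEnv(value) {
  if (typeof value === "string") {
    return value.replace(/\$\{(\w+)\}/g, (match, name) => {
      if (process.env[name] === undefined) {
        throw new Error(`Missing env var: ${name}`);
      }
      return process.env[name];
    });
  }
  if (Array.isArray(value)) return value.map(expandEnv);
  if (value && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value).map(([k, v]) => [k, expandEnv(v)])
    );
  }
  return value;
}

// Example: expand a header block before writing a client config.
process.env.DOCS_TOKEN = "test-token"; // normally set in your shell
const headers = expandEnv({ Authorization: "Bearer ${DOCS_TOKEN}" });
console.log(headers.Authorization); // "Bearer test-token"
```

Failing loudly on a missing variable is deliberate: a config with a literal `${DOCS_TOKEN}` string in it is the kind of bug a client won’t report clearly.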
Step 2) Define “profiles” to reduce MCP tool bloat
The rule that keeps setups healthy:
Default to the minimal toolset for the current workflow. Expand temporarily, then shrink again.
Profiles make this easy:
// profiles/minimal.json
{
"enable": ["docs"],
"disable": ["filesystem"]
}
// profiles/dev.json
{
"enable": ["docs", "filesystem"]
}
For agencies, profiles are where you encode client boundaries:
- client-acme.json: only the tools ACME approved
- client-beta.json: different CRM, different docs, different permissions
This is the lightweight version of “tool governance”: you don’t need a committee—you need defaults.
Step 3) Sync strategy across Claude Desktop, Cursor, and VS Code
Different clients, different config shapes, different file locations. Don’t fight it—generate client configs from your SSOT.
Known config entry points
- Cursor commonly uses ~/.cursor/mcp.json.
- VS Code supports a workspace .vscode/mcp.json, plus a user-profile configuration you can open via MCP commands; it also offers inline start/stop/restart actions when editing the file.
- Claude Desktop has historically used a JSON config file; some documentation references ~/.claude/claude_desktop_config.json.
Because locations can vary by OS/app version, your generator should target the path you’ve standardized internally (and your onboarding checklist should verify it).
Example generator (Node.js)
This script takes:
- catalog.json
- a profile.json
and outputs the minimal mcpServers map each client expects.
// scripts/generate-config.mjs
import fs from "node:fs";
const catalog = JSON.parse(fs.readFileSync("mcp/catalog.json", "utf8"));
const profile = JSON.parse(fs.readFileSync(process.argv[2], "utf8"));
const enabled = new Set(profile.enable ?? []);
const disabled = new Set(profile.disable ?? []);
const mcpServers = {};
for (const [name, def] of Object.entries(catalog.servers)) {
if (enabled.size && !enabled.has(name)) continue;
if (disabled.has(name)) continue;
// Normalize a few common shapes across clients
if (def.transport === "stdio") {
mcpServers[name] = {
type: "stdio",
command: def.command,
args: def.args,
env: def.env
};
} else {
mcpServers[name] = {
type: "streamable-http",
url: def.url,
headers: def.headers
};
}
}
const out = { mcpServers };
fs.writeFileSync(process.argv[3], JSON.stringify(out, null, 2));
console.log(`Wrote ${process.argv[3]} with ${Object.keys(mcpServers).length} servers`);
Usage:
node scripts/generate-config.mjs mcp/profiles/minimal.json mcp/clients/cursor.json
node scripts/generate-config.mjs mcp/profiles/dev.json mcp/clients/vscode.workspace.json
Then either:
- copy the generated file into the client-specific location, or
- symlink it (best for operator machines)
mkdir -p ~/.cursor
ln -sf "$PWD/mcp/clients/cursor.json" ~/.cursor/mcp.json
Step 4) Updates and rollback (the discipline that saves you)
Updates are where MCP setups die—especially with stdio servers that pull runtime deps.
A simple update policy that works:
- Pin versions where you can (npm, pip, git tags).
- Batch updates into a weekly/biweekly change window.
- Smoke test the profile(s) you actually run.
- Tag a last-known-good snapshot so rollback is one command.
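The “last-known-good snapshot” can be as simple as copying generated client configs into a dated directory before each change window. A sketch, assuming the repo layout from Step 1 (in practice you’d point it at mcp/clients and mcp/snapshots):

```javascript
// Copy every generated client config into a dated snapshot directory,
// so rollback is just copying the files back.
import fs from "node:fs";
import os from "node:os";
import path from "node:path";

function snapshot(srcDir, snapshotRoot) {
  const stamp = new Date().toISOString().slice(0, 10); // e.g. 2025-01-15
  const destDir = path.join(snapshotRoot, stamp);
  fs.mkdirSync(destDir, { recursive: true });
  for (const file of fs.readdirSync(srcDir)) {
    if (file.endsWith(".json")) {
      fs.copyFileSync(path.join(srcDir, file), path.join(destDir, file));
    }
  }
  return destDir;
}

// Demo against a throwaway directory.
const tmp = fs.mkdtempSync(path.join(os.tmpdir(), "mcp-"));
fs.mkdirSync(path.join(tmp, "clients"));
fs.writeFileSync(path.join(tmp, "clients", "cursor.json"), "{}");
const dest = snapshot(path.join(tmp, "clients"), path.join(tmp, "snapshots"));
console.log(fs.readdirSync(dest)); // [ 'cursor.json' ]
```

If your configs live in Git anyway, a tag serves the same purpose; the copy approach just works even for operator machines that only consume generated files.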
Smoke test idea: “list tools and run one safe call”
Even if every MCP server is different, your smoke test can still be consistent:
- verify the server starts
- verify tools are discovered
- run a read-only tool if available
At minimum, validate your JSON before you even open the client:
python -m json.tool ~/.cursor/mcp.json >/dev/null
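The repo layout in Step 1 reserves scripts/smoke-test.mjs; a minimal version can at least verify the generated config’s shape before you involve a client. The validation rules below are our own assumptions about what “well-formed” means for the configs this playbook generates:

```javascript
// scripts/smoke-test.mjs — structural checks on a generated config:
// every server needs a known transport type, plus a command (stdio)
// or a url (streamable-http).
function validateConfig(config) {
  const errors = [];
  if (!config.mcpServers || typeof config.mcpServers !== "object") {
    return ["missing top-level mcpServers object"];
  }
  for (const [name, def] of Object.entries(config.mcpServers)) {
    if (def.type === "stdio" && !def.command) {
      errors.push(`${name}: stdio server missing "command"`);
    } else if (def.type === "streamable-http" && !def.url) {
      errors.push(`${name}: http server missing "url"`);
    } else if (def.type !== "stdio" && def.type !== "streamable-http") {
      errors.push(`${name}: unknown transport "${def.type}"`);
    }
  }
  return errors;
}

// Example run against an inline config (normally you'd read the
// generated file with fs.readFileSync and JSON.parse).
const errors = validateConfig({
  mcpServers: {
    docs: { type: "streamable-http", url: "https://example.com/mcp" },
    broken: { type: "stdio" } // missing command — should be flagged
  }
});
console.log(errors); // [ 'broken: stdio server missing "command"' ]
```

It won’t tell you a server actually works, but it catches the most common class of “servers not showing” failures before a restart cycle.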
Prefer packaged installs when available
Claude Desktop now supports Desktop Extensions (MCP bundles) to reduce manual install and dependency issues, including updates handled by the extension system.
If a server you rely on ships as a bundle, that’s often an ops win.
Step 5) Troubleshooting: “servers not showing” and other common failures
When MCP servers don’t show up, run this checklist in order.
1) Restart the client
It sounds trivial, but many setups only load MCP config at startup; even vendor docs frequently recommend a restart after config changes.
2) Confirm you edited the right config file
- VS Code can be workspace (.vscode/mcp.json) or user-profile config.
- Cursor can be global (~/.cursor/mcp.json) or sometimes project-local depending on your workflow.
Standardize on one approach for your team:
- Agencies: keep workspace config for client-specific servers, global config for personal/dev utilities.
- Solo builders: global config + per-repo overrides.
3) Validate JSON and name collisions
- invalid JSON = silently ignored by some clients
- duplicate server names = “last one wins” behavior you won’t notice
4) Check auth and secrets handling
If a server uses HTTP + headers, verify:
- token exists in your env
- token has the right scope
- you didn’t accidentally commit a placeholder
VS Code explicitly recommends avoiding hardcoded secrets and using safer mechanisms.
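A preflight check can catch a missing token before any client is involved: scan the catalog for ${VAR} placeholders and report any that don’t resolve. The helper name and approach here are our own sketch:

```javascript
// Scan the catalog for ${VAR} placeholders and report any that are not
// set in the current environment — catches missing tokens before a
// client silently fails to start a server.
function missingEnvVars(catalog) {
  const referenced = new Set(
    [...JSON.stringify(catalog).matchAll(/\$\{(\w+)\}/g)].map((m) => m[1])
  );
  return [...referenced].filter((name) => process.env[name] === undefined);
}

// Example with the catalog shape from Step 1 (DOCS_TOKEN deliberately unset).
delete process.env.DOCS_TOKEN;
process.env.WORKSPACE_ROOT = "/tmp/acme";
const missing = missingEnvVars({
  servers: {
    filesystem: { env: { FILESYSTEM_ROOT: "${WORKSPACE_ROOT}" } },
    docs: { headers: { Authorization: "Bearer ${DOCS_TOKEN}" } }
  }
});
console.log(missing); // [ 'DOCS_TOKEN' ]
```

It can’t verify token scope, but “the variable isn’t even set” is the failure you hit most often on a fresh machine.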
5) Look for stderr logs (stdio transport)
The MCP spec allows servers to log to stderr, and clients may capture it. If tools never load, your first clue is often a missing runtime, a bad env var, or a crash loop.
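When you need that clue outside a client, you can launch the server command yourself and capture stderr directly. A sketch using Node’s child_process; the spawned command below is a stand-in that fails the way a misconfigured server might:

```javascript
// Start a stdio server command and collect stderr until it exits —
// usually the first clue for a missing runtime, bad env var, or crash loop.
import { spawn } from "node:child_process";

function runAndCaptureStderr(command, args) {
  return new Promise((resolve) => {
    const child = spawn(command, args);
    let stderr = "";
    child.stderr.on("data", (chunk) => (stderr += chunk));
    child.on("close", (code) => resolve({ code, stderr }));
  });
}

// Stand-in for a real server entry from your catalog.
const result = await runAndCaptureStderr("node", [
  "-e",
  "console.error('FILESYSTEM_ROOT not set'); process.exit(1)",
]);
console.log(result.code, result.stderr.trim());
// 1 FILESYSTEM_ROOT not set
```

A non-zero exit code with an empty stderr is itself informative: it usually means the command couldn’t start at all (wrong path, missing runtime).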
Step 6) Lightweight governance and security guardrails
You don’t need heavyweight bureaucracy—just default safety.
Practical guardrails:
- Allowlist servers per profile (especially for client machines).
- Label risk in your catalog (risk: high for filesystem, shell, browser automation).
- Require explicit enablement for destructive tools.
- Keep “break glass” profiles (temporarily enable more tools for debugging, then revert).
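“Require explicit enablement” can live in the generator rather than in policy docs. A sketch that uses the risk field from the catalog in Step 1 (the enforcement rule itself is our assumption):

```javascript
// Drop any server labeled risk: "high" unless the profile names it
// explicitly — "default safety" as code rather than policy.
function applyGuardrails(catalog, profile) {
  const enabled = new Set(profile.enable ?? []);
  const kept = {};
  for (const [name, def] of Object.entries(catalog.servers)) {
    if (def.risk === "high" && !enabled.has(name)) continue;
    kept[name] = def;
  }
  return kept;
}

// Example: "filesystem" is high-risk and not enabled, so it is dropped.
const kept = applyGuardrails(
  {
    servers: {
      docs: { risk: "low" },
      filesystem: { risk: "high" }
    }
  },
  { enable: ["docs"] }
);
console.log(Object.keys(kept)); // [ 'docs' ]
```

This matters most for profiles with an empty enable list (“everything on”): low-risk servers still pass through, but destructive ones must be opted into by name.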
Also: some clients support “trust” flows. For example, VS Code has mechanisms for managing trusted MCP servers for authenticated accounts.
Where this goes next: from tool chaos to white-box workflows
If you’re an agency, MCP is a powerful layer—but it’s still easy to end up with:
- per-operator differences
- “it works on my machine” tool stacks
- accidental permission creep
That’s exactly why we built nNode: white-box, reusable workflows with explicit steps, integrations, and guardrails—so your automations are repeatable and auditable instead of being a pile of local editor configs.
If your MCP server management is starting to feel like production ops, it’s a good sign you’re ready to graduate from ad-hoc tool stacks to workflow-grade automation.
If you want to see what “white-box workflows” looks like in practice, take a look at nnode.ai (especially if you’re shipping automations for clients and need reliability without a bunch of local setup).