Auth fails at startup
Server logs `SYNOLOGY_API_ERROR { inner_code: 400 }` and refuses to start; `/ready` never comes up.
Most common cause: the password contains a shell metacharacter and your env file did not quote it. The server logs the failing field, never the value.
```shell
# Wrong — $ expands in your shell, not in the env file
SYNOLOGY_PASSWORD=p@$$word

# Right — single-quote, or escape
SYNOLOGY_PASSWORD='p@$$word'
```
Other causes, in order of frequency:
- 2FA enabled on the user (use a dedicated MCP user without 2FA).
- IP blocked by DSM after failed attempts (DSM → Security → Block List).
- Wrong port: 5000 vs 5001 — HTTPS uses 5001 by default.
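To rule the server out entirely, you can test the credentials straight against DSM's login API (`SYNO.API.Auth`). A minimal sketch, assuming `jq` is installed; the hostname and the `mcp` account name are placeholders for your own:

```shell
# Sanity-check credentials outside the server. Hostname and account are assumptions.
NAS_HOST="nas.example.com"
NAS_PORT=5001
PASSWORD='p@$$word'
# URL-encode the password so metacharacters survive the query string.
ENCODED=$(printf '%s' "$PASSWORD" | jq -sRr @uri)
LOGIN_URL="https://${NAS_HOST}:${NAS_PORT}/webapi/auth.cgi?api=SYNO.API.Auth&version=3&method=login&account=mcp&passwd=${ENCODED}"
echo "$LOGIN_URL"
# curl -sk "$LOGIN_URL"   # a {"success":true,...} reply means the credentials are fine
```

If the direct call succeeds but the server still fails, the problem is the env file, not the account.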
TLS handshake fails
Self-signed certificates are the typical case. The server refuses to accept them by default. Two correct fixes:
1. Trust the NAS certificate on the host (preferred). Linux: install to /usr/local/share/ca-certificates and run update-ca-certificates. macOS: add to the System keychain and mark as Always Trust.
2. Issue a real certificate via Synology DSM → Security → Certificate. Let’s Encrypt support is built in for hostnames resolvable on public DNS.
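To see the difference a trust anchor makes, here is a self-contained sketch using a throwaway self-signed certificate in place of the NAS one (names are illustrative; assumes `openssl` is installed):

```shell
# Generate a throwaway self-signed cert standing in for the NAS certificate.
openssl req -x509 -newkey rsa:2048 -nodes -keyout nas.key -out nas.crt \
  -days 1 -subj "/CN=nas.local" 2>/dev/null
# Without a trust anchor, verification fails. This is what the server sees by default:
UNTRUSTED=$(openssl verify nas.crt >/dev/null 2>&1 && echo ok || echo fail)
# With the cert itself installed as a trust anchor, the same check passes:
TRUSTED=$(openssl verify -CAfile nas.crt nas.crt >/dev/null 2>&1 && echo ok || echo fail)
echo "$UNTRUSTED $TRUSTED"
```

Installing to /usr/local/share/ca-certificates and running update-ca-certificates is the system-wide equivalent of the `-CAfile` step.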
PATH_GUARD_VIOLATION on every call
You set DRIVE_ROOT_PATH=/team but the agent is calling drive_list_files({ path: "/Team" }) (capital T). DSM Drive itself may resolve either casing depending on the volume's filesystem settings, but the path-guard always compares case-sensitively.
Fix: pick one casing in DRIVE_ROOT_PATH and ensure agent prompts use the same. Convention: lowercase top-level folders.
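The guard's behavior amounts to an exact, case-sensitive prefix match. An illustrative sketch (not the server's actual code):

```shell
# Exact, case-sensitive prefix match against DRIVE_ROOT_PATH.
DRIVE_ROOT_PATH="/team"
check() {
  case "$1" in
    "$DRIVE_ROOT_PATH"|"$DRIVE_ROOT_PATH"/*) echo "allowed";;
    *) echo "PATH_GUARD_VIOLATION";;
  esac
}
check "/team/reports"   # allowed
check "/Team/reports"   # PATH_GUARD_VIOLATION — capital T, exact bytes matter
```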
SSE refuses to start on 0.0.0.0
By design. The server requires MCP_AUTH_TOKEN before it will bind a non-loopback interface. Generate a token, add it to env, and use it in the Authorization header from your client.
```shell
openssl rand -hex 32
# → a93f...e2c1

# In env file
MCP_AUTH_TOKEN=a93f...e2c1

# In client
Authorization: Bearer a93f...e2c1
```
RATE_LIMITED under load
The default of 20 rps is conservative — a single agent rarely hits it; a fan-out workflow does. Raise it consciously, or batch.
Prefer batching. spreadsheet_batch_update collapses N writes into one RPC; drive_search_files with a higher limit collapses N reads into one. The rate limit is per-client and exists to protect the NAS, not the MCP server.
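As a sketch of the batching idea, three single-cell writes collapse into one payload. The exact argument shape of spreadsheet_batch_update is an assumption here — check the tool's schema; the file path and cells are invented. Assumes `jq`:

```shell
# Hypothetical batch payload: one request carries all three writes.
BATCH=$(jq -n '{
  path: "/team/budget.xlsx",
  updates: [
    {cell: "B2", value: 120},
    {cell: "B3", value: 340},
    {cell: "B4", value: 95}
  ]
}')
echo "$BATCH" | jq '.updates | length'   # 3 writes, 1 call against the rate limit
```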
Reading the logs
Logs are JSON-per-line. Three fields matter: tool, code, and ms.
```json
{"level":"info","ts":"2025-04-30T14:22:11.043Z","tool":"drive_list_files","ms":87,"ok":true}
{"level":"warn","ts":"2025-04-30T14:22:14.519Z","tool":"drive_upload_file","code":"CONFIRM_REQUIRED","ms":2}
{"level":"error","ts":"2025-04-30T14:22:31.882Z","tool":"spreadsheet_open","code":"NOT_FOUND","ms":134}
```
Credentials, session ids, and full file contents are redacted before logging — the redaction happens at the transport layer, not at the log call site, so it cannot be bypassed by a future logger.
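Because the format is JSON-per-line, `jq` is enough for triage. A small sketch that keeps only entries carrying an error code, projected down to the three fields that matter (sample lines copied from the excerpt above):

```shell
LOG='{"level":"info","ts":"2025-04-30T14:22:11.043Z","tool":"drive_list_files","ms":87,"ok":true}
{"level":"warn","ts":"2025-04-30T14:22:14.519Z","tool":"drive_upload_file","code":"CONFIRM_REQUIRED","ms":2}
{"level":"error","ts":"2025-04-30T14:22:31.882Z","tool":"spreadsheet_open","code":"NOT_FOUND","ms":134}'
# Keep only entries with a code field; drop the rest of each object.
printf '%s\n' "$LOG" | jq -c 'select(.code != null) | {tool, code, ms}'
```

The same filter works on a live file with `tail -f server.log | jq -c ...`.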