Use `openclaw mcp serve` when an MCP client should reach OpenClaw-backed conversations through a stdio bridge. The bridge joins your Gateway over WebSocket and exposes conversation listing, transcript reads, live events, replies, and permission helpers, with no custom per-channel glue. Below is the day-one checklist: launch patterns, token files, how allowlists split across client policy and OpenClaw, and why live queues reset per session. Still provisioning the host? See Deploying OpenClaw on a cloud Mac in 2026: macOS checks vs Linux VPS, launchd persistence, and a reproducible FAQ and 2026 OpenClaw Linux cloud VPS hands-on: curl install vs Docker, environment checks, and common errors FAQ.
## What `openclaw mcp serve` actually starts
Your MCP client spawns the bridge; while stdio stays open, the bridge connects to a local or remote Gateway and mirrors routed conversations into MCP tools. Discovery depends on Gateway route metadata, so missing rows in `conversations_list` usually mean incomplete session routes, not malformed MCP JSON. Read backlog with `messages_read`; `events_poll` / `events_wait` only cover events that arrive after the bridge connects.
```bash
# Local Gateway (default)
openclaw mcp serve

# Remote Gateway + token file (avoid shell history)
openclaw mcp serve --url wss://gateway-host:18789 \
  --token-file ~/.openclaw/gateway.token

# Verbose bridge logs on stderr
openclaw mcp serve --verbose
```
| Topology | Best for | Watch-outs |
|---|---|---|
| Local Gateway + local client | Laptop iteration, deterministic repro | Still treat approvals and sends as production-capable actions |
| Remote Gateway + token file | Shared team Gateway on a cloud Mac/VPS | Rotate tokens like SSH keys; scope filesystem permissions on the token path |
| Claude channel mode on | Claude Code-style clients that understand extra notifications | Generic clients should prefer polling tools; notifications are live-session only |
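On the client side, the bridge is just a stdio server to spawn. A minimal sketch of a client entry, assuming a Claude-style `mcpServers` config schema (key names, the `openclaw` server label, and paths are placeholders; your client's schema may differ):

```json
{
  "mcpServers": {
    "openclaw": {
      "command": "openclaw",
      "args": [
        "mcp", "serve",
        "--url", "wss://gateway-host:18789",
        "--token-file", "/home/agent/.openclaw/gateway.token"
      ]
    }
  }
}
```

Referencing the token by path keeps the secret out of the JSON itself, which matches the token-file guidance below.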
## Token and password auth without leaking secrets
Remote Gateways accept the usual controls: --token / --token-file or --password / --password-file. Prefer file-based flags so secrets skip shell history. In CI, mount tokens read-only and keep WebSocket URLs out of shared logs. MCP JSON should reference paths, not inline secrets, with POSIX permissions tightened to the agent user.
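A minimal provisioning sketch for the token file, assuming a POSIX shell on the bridge host; the `GATEWAY_TOKEN` variable and paths are illustrative, and you would obtain the token however your Gateway issues them:

```shell
# Provision a Gateway token file readable only by the agent user.
umask 077                                   # new files default to mode 600
mkdir -p "$HOME/.openclaw"
printf '%s' "${GATEWAY_TOKEN:-example-token}" > "$HOME/.openclaw/gateway.token"
chmod 600 "$HOME/.openclaw/gateway.token"   # belt and suspenders
```

`printf` avoids the trailing newline that `echo` would add, and setting `umask` before the redirect means the file is never world-readable, even briefly.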
## Tool “whitelists”: three layers that are easy to conflate
Layer one: the bridge publishes a fixed tool surface—your MCP client’s allow/deny list is the first gate. Layer two: openclaw mcp list|set|unset stores outbound MCP definitions for other OpenClaw runtimes; writes never probe the remote server. Layer three: channel allowlists and pairing still decide who may talk; messages_send only follows an existing session route. Debug “missing tools” in that order: client profile, Gateway routes, channel policy.
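Layer two is the easiest to inspect from the bridge host. A sketch using the `openclaw mcp list` subcommand named above (its output format is an assumption, and the fallback branch keeps the snippet safe to run anywhere):

```shell
# Layer-two check: which outbound MCP definitions has OpenClaw stored?
if command -v openclaw >/dev/null 2>&1; then
  openclaw mcp list
else
  echo "openclaw not on PATH -- run this on the bridge host"
fi
```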
## Session isolation: queues, approvals, and Claude push
Each stdio session owns an in-memory live queue; disconnecting drops queued live state—use messages_read for durable backlog. permissions_list_open is likewise ephemeral, and Claude channel notifications only exist while the bridge stays up. Treat long-lived IDE sessions differently from short CI probes when you document approval timeouts.
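The reconnect discipline above can be sketched as an ordered sequence. Here `call_tool` is a stand-in stub for however your MCP client invokes tools; only the tool names come from this doc:

```shell
# Stub standing in for a real MCP tool invocation.
call_tool() { echo "calling tool: $1"; }

call_tool messages_read   # 1. durable backlog survives reconnects
call_tool events_poll     # 2. drain anything queued since this connect
call_tool events_wait     # 3. then block for new live events only
```

The point of the ordering: anything that happened while the bridge was down exists only in the transcript, so read it first rather than waiting for a live event that will never replay.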
A Docker-backed smoke suite (`pnpm test:docker:mcp-channels`) exercises conversation discovery, transcript reads, attachment metadata, live queue behavior, and notification paths; run it before you wire production Telegram/Discord routes.
## FAQ we answer in incident reviews
Why does conversations_list return empty?
Confirm that provider, recipient, and any optional account/thread metadata exist; an inbound message often materializes the missing route.
Why did events_wait miss an older inbound message?
Expected: live queues start at connect. Read older lines with messages_read, then poll or wait for new events.
Should we enable Claude channel mode for everyone?
The default, auto, currently resolves to on; even so, only enable Claude notifications for clients that implement them, and let generic agents poll.
How do we keep CI from approving dangerous exec requests?
Treat permissions_respond like sudo: narrow auto-approvers, keep humans on new plugins, and never substitute bridge state for code review.
## On a cloud Mac mini, this workflow stays boring—in a good way
Gateway-backed agents benefit from the same traits that make macOS a strong dev station: a native Unix userland, predictable launchd scheduling for always-on bridges, and Homebrew-quality packaging for the dependencies your MCP stack touches. Apple Silicon’s unified memory keeps Node and websocket workloads smooth under parallel sessions, while idle power on a Mac mini M4 often sits around a few watts—quiet enough to leave online for long-lived bridges without treating your desk like a datacenter.
Security-wise, Gatekeeper, SIP, and FileVault stack neatly with file-based tokens: you can scope secrets to a single service account and avoid the wider malware surface typical of always-on Windows boxes. That combination—efficiency, stability, and a cleaner trust boundary—usually beats juggling spare laptops when you want OpenClaw online 24/7.
If you are standardizing where the Gateway lives, VPSSpark cloud Mac mini M4 plans are a practical place to start—explore plans now and keep your MCP bridge on hardware that is built to run macOS all day.