VPSSpark Blog

2026 OpenClaw Linux cloud VPS: multi-profile parallel gateways — port matrix, systemd user units, OPENCLAW_* directory isolation, reproducible deploy and conflict FAQ

Server Notes · 2026.04.27 · ~7 min read


One small Linux cloud instance can host more than one OpenClaw Gateway-shaped workload: a production bot, a staging bridge, and a personal experiment — each with its own tokens, config, and listen address. The failure mode is never “Linux cannot multitask”; it is colliding ports, shared state directories, and systemd units that fight over the same user or working directory. This note lays out a reproducible pattern: a port matrix, per-profile systemd --user units, explicit OPENCLAW_* directory isolation, and a short FAQ for the conflicts we still see in 2026.

At a glance:
- N profiles, each with its own unique port and directories
- user@ units for systemd user-slice isolation
- 1 port-matrix doc, checked in with the infra repo

Why run parallel profiles on one host?

Separate VMs cost money and IPv4; separate containers still share kernel port space unless you map carefully. Parallel profiles on one VPS work when each profile owns a non-overlapping bind port, a dedicated OS user or home for file permissions, and a documented environment prefix so operators never paste the wrong token into the wrong unit. Treat “profile” as an operations concept: prod/stage/dev, or team A vs team B — not as three copies of the same default config path.

Port matrix: bind addresses before reverse proxies

Decide loopback vs public bind before you enable TLS in front. Most teams keep each Gateway on 127.0.0.1 with Nginx or Caddy on 0.0.0.0:443; parallel profiles then differ only by loopback port. If you must expose HTTP directly, extend the matrix with firewall rows per security group. For exposure patterns and SSH vs HTTPS trade-offs, see 2026 OpenClaw Linux cloud hosts: minimal attack surface — firewall templates, Gateway loopback binding, SSH tunnel management vs public HTTPS (matrix + FAQ).

Profile | HTTP bind       | Notes
prod    | 127.0.0.1:18789 | Stable unit name; pinned in load balancer upstream
stage   | 127.0.0.1:18790 | Never reuse prod tokens; separate systemd unit
dev     | 127.0.0.1:18791 | Optional: restrict to VPN or SSH tunnel only
Same binary, different universe
Parallel gateways are not “the same service twice” unless you intentionally duplicate binaries. Keep one install path per machine image, and vary only env files, ports, and state roots so upgrades roll forward once.
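With the matrix pinned, the reverse-proxy side is just a restatement of one row. A minimal Nginx vhost sketch for the prod profile is below; the hostname (reused from the smoke-test section) and certificate paths are placeholders, not prescriptions.

```nginx
# Sketch: TLS terminates at Nginx; the upstream is the prod row of the matrix.
# Hostname and certificate paths are placeholders for your own setup.
server {
    listen 443 ssl;
    server_name prod-gateway.example.com;

    ssl_certificate     /etc/letsencrypt/live/prod-gateway.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/prod-gateway.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:18789;   # must match the port matrix row
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Stage and dev get their own server blocks differing only in `server_name` and the loopback port, which keeps port drift visible in one file diff.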

systemd user units: linger, slices, and log boundaries

systemd --user services under distinct Linux users give clean journalctl --user -u … boundaries without crowding /etc/systemd/system. Enable loginctl enable-linger for the service accounts that must survive logout. Pair each unit with WorkingDirectory= and EnvironmentFile= pointing at a profile-specific drop-in so daemon-reload never merges two profiles by accident. If you prefer system-level units, use templated units ([email protected]) — the invariant is the same: one unit graph node per profile, one port row in the matrix.
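Concretely, a per-profile user unit can be as small as the sketch below. The unit name and binary path are assumptions; substitute whatever your image actually installs.

```ini
# ~/.config/systemd/user/openclaw-gateway.service (in the prod service account)
# Unit name and binary path are assumptions; match them to your install.
[Unit]
Description=OpenClaw Gateway (prod profile)

[Service]
EnvironmentFile=/etc/openclaw/prod.env
WorkingDirectory=/var/lib/openclaw/prod
ExecStart=/usr/local/bin/openclaw-gateway
Restart=on-failure

[Install]
WantedBy=default.target
```

Enable it with `loginctl enable-linger <service-user>` followed by `systemctl --user enable --now openclaw-gateway`, so the gateway survives logout and reboot without an interactive session.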

Per-profile env file (example fields)
# /etc/openclaw/prod.env — keep mode 640 and owner = service user
OPENCLAW_STATE_DIR=/var/lib/openclaw/prod
OPENCLAW_CONFIG_PATH=/etc/openclaw/prod.yaml
# Gateway listen — must match port matrix
OPENCLAW_GATEWAY_BIND=127.0.0.1:18789

OPENCLAW_* directories: isolate state, not just config

Config files are obvious; runtime state (sessions, local caches, downloaded artifacts) is where silent cross-talk happens. Set OPENCLAW_STATE_DIR (and any companion variables your distribution documents) to disjoint paths per profile — for example /var/lib/openclaw/prod vs /var/lib/openclaw/stage — and enforce ownership with the matching service user. Backups and disk quotas then align with blast radius: wiping stage never touches prod tokens on disk.
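The disjoint-paths rule is easy to script. The sketch below defaults to a temporary root so it is safe to dry-run anywhere; on a real host, export OPENCLAW_ROOT=/var/lib/openclaw and run as root. The openclaw-<profile> service-user names are assumptions.

```shell
set -eu
# Dry-run safe: defaults to a temp root. For a real host, run as root with
# OPENCLAW_ROOT=/var/lib/openclaw. Service user names are assumptions.
ROOT="${OPENCLAW_ROOT:-$(mktemp -d)}"
for profile in prod stage dev; do
  dir="${ROOT}/${profile}"
  install -d -m 750 "$dir"     # disjoint state root per profile
  # chown only when running as root AND the per-profile user exists
  if [ "$(id -u)" -eq 0 ] && id "openclaw-${profile}" >/dev/null 2>&1; then
    chown "openclaw-${profile}:openclaw-${profile}" "$dir"
  fi
done
ls -1 "$ROOT"
```

Mode 750 plus per-profile ownership means a leaked stage token cannot be read by the dev process even on the same kernel.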

Home-directory defaults
If you skip explicit state roots, two user-level units under the same account may still converge on the same default under ~/.config or ~/.local/share depending on packaging — always set explicit paths in env for parallel profiles.

Reproducible deploy checklist

Check the port matrix and env files into the same repository as your Ansible or cloud-init. Apply changes with systemctl --user daemon-reload (or system template reload), then systemctl restart profiles in dependency order: dependencies first, edge listeners last. Document a single rollback: previous env file revision plus previous unit drop-in. Persistence differs from macOS launchd; for a side-by-side mental model when you also operate cloud Macs, read Deploying OpenClaw on a cloud Mac in 2026: macOS checks vs Linux VPS, launchd persistence, and a reproducible FAQ.
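Since the matrix and the env files live in the same repo, drift between them can be caught before restart. A sketch of such a pre-deploy check is below; the tab-separated matrix.tsv layout (profile, bind) and the <profile>.env naming are assumptions about your repo layout.

```shell
set -eu
# Sketch: fail the deploy if any env file's bind disagrees with the matrix.
# Assumes matrix.tsv rows of "profile<TAB>bind" and env files named <profile>.env.
check_matrix() {
  matrix="$1"; envdir="$2"
  while IFS="$(printf '\t')" read -r profile bind; do
    actual="$(grep '^OPENCLAW_GATEWAY_BIND=' "${envdir}/${profile}.env" | cut -d= -f2)"
    if [ "$actual" != "$bind" ]; then
      echo "DRIFT: ${profile} binds ${actual}, matrix says ${bind}" >&2
      return 1
    fi
  done < "$matrix"
  echo "matrix OK"
}
```

Run it from CI before `daemon-reload`; a nonzero exit blocks the restart sequence, so the host never serves a port the matrix does not document.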

Smoke test after adding a profile

Before you announce a new listener to the team, walk a three-step ladder so tickets stay short. First, confirm only the intended sockets are open with ss -lntp and that each PID maps to the unit you expect from systemctl status. Second, curl the loopback row from the matrix with an explicit health or version path your Gateway exposes, not only the root URL. Third, hit the public hostname through the reverse proxy and compare response headers with the direct loopback probe — mismatches here almost always mean upstream port drift, not TLS.

Loopback ladder (adjust ports/paths)
ss -lntp | grep -E '18789|18790|18791'
curl -svS http://127.0.0.1:18789/health
curl -svS https://prod-gateway.example.com/health

When something still looks “healthy” but the wrong automation fires, diff the effective environment the unit actually sees: systemctl show for the service block, then compare against the checked-in env file. Silent drift from manual edits on the host is the main reason reproducible repos and live machines disagree after a month of on-call tweaks.
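One way to make that diff mechanical: normalize the single-line `systemctl show -p Environment` output into sorted KEY=VAL lines and diff it against the checked-in file. normalize_env below is a hypothetical helper, and it assumes no environment values contain spaces.

```shell
# Hypothetical helper: turn `systemctl show -p Environment` output
# ("Environment=A=1 B=2") into sorted KEY=VAL lines for diffing.
# Assumes no values contain spaces.
normalize_env() {
  sed 's/^Environment=//' | tr ' ' '\n' | grep . | sort
}

# Usage sketch (unit name and env path reuse this article's examples):
#   systemctl --user show openclaw-gateway -p Environment | normalize_env \
#     | diff - <(grep -v '^#' /etc/openclaw/prod.env | grep . | sort)
```

An empty diff means the live unit matches the repo; any output names the exact variable an on-call edit changed.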

FAQ: conflicts we debug in parallel setups

- “Address already in use” after reboot: a stale user session left a second copy running outside systemd; use ss -lntp and match PIDs to units.
- Wrong bot answers in staging: a token file path was reused; grep unit files for duplicate EnvironmentFile= entries.
- One profile’s disk fills the VPS: attach quota or log rotation per OPENCLAW_STATE_DIR, not globally.
- HTTPS works for one hostname only: the proxy upstream block points at the wrong loopback port row; fix the matrix, not ACME.
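The per-state-dir rotation fix can be one logrotate drop-in per profile. The sketch below assumes the Gateway writes logs under its state root, which is an assumption about your build; adjust the glob to wherever your profile actually logs.

```
# /etc/logrotate.d/openclaw-prod -- sketch; log path under the state root is assumed
/var/lib/openclaw/prod/logs/*.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
}
```

A matching openclaw-stage drop-in keeps a chatty staging bot from filling the disk out from under prod.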

Linux for gateways, cloud Mac for the rest of the loop

Parallel OpenClaw profiles on a Linux VPS are a textbook fit for low idle power, static IPs, and predictable systemd lifecycle — the same reasons teams park always-on bots next to burst workloads. When the other side of your workflow needs Xcode, signing, or Apple-only CLIs, a VPSSpark cloud Mac mini gives native macOS plus Unix ergonomics: Homebrew, SSH, and containers without the friction common on Windows, while Apple Silicon unified memory keeps interactive builds from stalling.

macOS stability, Gatekeeper, and SIP reduce surprise breakage compared with ad-hoc desktops, and the M4 Mac mini’s roughly 4W idle draw keeps an always-on helper machine next to your VPS economically sane.

If you are splitting always-on Linux gateways from Apple-only work, the VPSSpark cloud Mac mini M4 is a practical bridge: explore plans now and keep both halves of the stack on solid footing.


Parallel Linux profiles wired — need a cloud Mac next?

Pair a small VPS with VPSSpark Mac mini M4 for signing, Xcode, and burst CI
