Once an OpenClaw Gateway has lived on a small Linux VPS for a quarter, you stop arguing about Docker and start arguing about who pushes the next upgrade. In 2026 the realistic fork is GitLab CI/CD auto-deploy (merge requests, protected branches, runner-driven SSH) versus a pure SSH manual update (you in a terminal, `openclaw update` or `docker compose pull`, then health checks). Both ship; they fail differently. This piece is the GitLab-flavored sibling of our “OpenClaw Gateway on Linux VPS in 2026: GitHub Actions CI/CD vs Manual Docker Deploy — Decision Matrix, Repro Steps, and FAQ”, with the same production-onboarding baseline assumed.
Decision matrix: when GitLab CI/CD wins, when pure SSH wins
Read the table as a regret minimizer — “green” is not mandatory, it is the side you would rather be on when the relevant incident actually fires.
| Concern | GitLab CI/CD auto-deploy | Pure SSH manual update |
|---|---|---|
| Deploy frequency | Wins above roughly weekly cadence: the pipeline pays for itself | Wins below weekly: less YAML to maintain |
| Audit & bus factor | MR + pipeline log = source of truth | Only as good as your shell history file |
| Secrets handling | Masked/protected variables, environments | Env files on disk; tighten chmod 600 |
| Rollback | Re-run prior pipeline at the pinned digest | Keep compose.yml.prev + last image |
| Time to first working Gateway | Slower (runner, SSH key, variables) | Often fastest for one admin |
| Drift risk | Low — only the pipeline touches prod | High — “quick fix on the box” lives forever |
GitLab CI/CD: a minimal Gateway pipeline that does not lie
The smallest pipeline that earns its keep has three jobs: build (image with digest), deploy (SSH into the VPS via a protected job), and verify (loopback health probe). Restrict it to protected branches and tags, gate the deploy behind a manual environment, and never let a feature branch reach prod.
```yaml
# Stages: build → deploy → verify
stages: [build, deploy, verify]

build_image:
  stage: build
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
  only: [main]

deploy_vps:
  stage: deploy
  environment: { name: production, url: https://gateway.example.com }
  when: manual  # human gate before prod
  before_script:
    - eval $(ssh-agent -s) && echo "$SSH_PRIVATE_KEY" | ssh-add -
    - mkdir -p ~/.ssh && ssh-keyscan $VPS_HOST >> ~/.ssh/known_hosts
  script:
    - ssh deploy@$VPS_HOST "cd /srv/openclaw && export IMAGE=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA && docker compose pull && docker compose up -d"

verify_health:
  stage: verify
  script:
    - ssh deploy@$VPS_HOST "curl -fsS http://127.0.0.1:<health-port>/health"
```
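The single curl in the verify job can race the few seconds compose needs to bring the container up. A small retry wrapper is worth the extra lines; this is a sketch, where the URL, attempt count, and the `CHECK` override (which lets you test the loop without a real Gateway) are all assumptions:

```shell
# Retry the loopback health probe instead of failing on the first miss.
# CHECK defaults to curl but can be overridden (e.g. CHECK=true) for dry runs.
wait_healthy() {
  local url="$1" tries="${2:-5}" check="${CHECK:-curl -fsS}"
  local i=1
  while [ "$i" -le "$tries" ]; do
    if $check "$url" >/dev/null 2>&1; then
      echo "healthy after $i attempt(s)"
      return 0
    fi
    sleep 2
    i=$((i + 1))
  done
  echo "gateway never became healthy" >&2
  return 1
}

# On the VPS this would be: wait_healthy "http://127.0.0.1:<health-port>/health"
CHECK=true wait_healthy "http://127.0.0.1:8080/health"
```

In the pipeline, the `verify_health` job would call this over SSH exactly like the single curl does today; a non-zero return fails the job loudly.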
- Protected variables — store `SSH_PRIVATE_KEY`, `VPS_HOST`, and the registry token as masked + protected; restrict them to protected branches/tags only.
- Pin by digest, not by tag — promote `$CI_COMMIT_SHA` (or the resolved `sha256:`) so re-runs are deterministic; `:latest` in production is how Friday afternoons go bad.
- Manual prod gate — the production environment job stays `when: manual`; pipelines are auditable, but humans still pull the trigger.
- Runner separation — use a dedicated runner (or a tagged shared one) for deploy jobs; the build runner should not hold prod SSH keys.
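The pin-by-digest advice above can be sketched as a tiny helper. In CI the input would come from something like `docker image inspect --format '{{index .RepoDigests 0}}' "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"`; the registry path below is a placeholder, not your real image:

```shell
# Turn a repo@digest reference into the bare digest the deploy job should pin.
digest_of() {
  printf '%s\n' "${1#*@}"   # strip everything up to and including the @
}

digest_of "registry.example.com/group/openclaw@sha256:deadbeef"
```

Exporting that value as a job artifact (or dotenv report) lets the deploy job reference the immutable digest instead of whatever a tag currently points at.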
Pure SSH manual update: a runbook, not a vibe
Manual deploy is fine — provided you treat it as a runbook, not muscle memory. The point is to make the next person (or future-you at 2 a.m.) able to repeat what you did.
- Pre-flight — record the current image digest and config hash: `docker compose images > ~/openclaw/state/$(date +%F).txt`.
- Snapshot state — tar the persistent volumes (tokens, sessions, sqlite). Without this, “rollback” is wishful thinking.
- Apply — `openclaw update` for managed installs, or `docker compose pull && docker compose up -d` for compose-based ones.
- Verify — loopback health, then run `openclaw doctor`; if you proxy through nginx/Caddy, curl the public URL too.
- Roll back if needed — `docker compose down && IMAGE=<prev-digest> docker compose up -d`; restore volumes only if the upgrade migrated state.
- Record — keep a `versions.txt` beside `compose.yml` in a private repo and commit after every successful change.
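The pre-flight and snapshot steps can be sketched as a short script. The state directory layout is an assumption, and the `docker compose images` / `tar` commands from the runbook are left as comments so the skeleton itself runs anywhere:

```shell
# Pre-flight sketch: timestamped state dir + config hash before touching prod.
set -eu
WORK="$(mktemp -d)"                             # stand-in for ~/openclaw
STATE_DIR="$WORK/state"
mkdir -p "$STATE_DIR"
STAMP="$(date +%F)"
printf 'services: {}\n' > "$WORK/compose.yml"   # stand-in for your real compose.yml

# On the VPS, the real recording steps go here:
#   docker compose images > "$STATE_DIR/images-$STAMP.txt"
#   tar czf "$STATE_DIR/volumes-$STAMP.tar.gz" <persistent volume paths>

sha256sum "$WORK/compose.yml" | awk '{print $1}' > "$STATE_DIR/compose-$STAMP.sha256"
echo "recorded config hash in $STATE_DIR/compose-$STAMP.sha256"
```

The point of hashing `compose.yml` is cheap drift detection: if tomorrow's hash differs and nobody committed a change, someone did a quick fix on the box.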
FAQ: pipeline pitfalls we keep seeing
The pipeline goes green but the Gateway is still on the old image?
Almost always a tag-vs-digest issue. Either the deploy job pulled `:latest` (cached on the host) or `docker compose` reused an existing container because nothing in `compose.yml` changed. Pin `image:` to the digest produced by the build job and add `--force-recreate` when in doubt.
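A cheap way to catch this case is to compare the digest the host actually runs against the one the pipeline shipped. On the VPS `running` would come from `docker image inspect --format '{{index .RepoDigests 0}}' "$IMAGE"`; both values below are placeholders so the comparison itself stays visible:

```shell
# Detect the "green pipeline, stale host" mismatch by digest, not by tag.
running="registry.example.com/openclaw@sha256:aaaa"
wanted="registry.example.com/openclaw@sha256:bbbb"
if [ "${running#*@}" = "${wanted#*@}" ]; then
  echo "host runs the deployed digest"
else
  echo "stale image: host has ${running#*@}, pipeline shipped ${wanted#*@}"
fi
```

Wiring this check into the verify stage turns a silent stale deploy into a red pipeline.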
SSH from the runner suddenly fails with “host key verification failed.”
The VPS was reinstalled or its SSH host key rotated. Refresh `known_hosts` via `ssh-keyscan` in `before_script`, and store the expected fingerprint as a CI variable so a silent host swap fails loudly instead of trusting whatever answers.
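The fingerprint pin can be sketched as a small check. The variable name and sample fingerprint are assumptions; the live value would come from something like `ssh-keyscan -t ed25519 "$VPS_HOST" 2>/dev/null | ssh-keygen -lf - | awk '{print $2}'`:

```shell
# Fail the deploy job loudly when the host key does not match the pinned one.
check_host_fp() {
  expected="$1"; actual="$2"
  if [ "$expected" != "$actual" ]; then
    echo "host key changed: expected $expected, got $actual" >&2
    return 1
  fi
  echo "host key pinned and matching"
}

check_host_fp "SHA256:AbCd1234" "SHA256:AbCd1234"
```

Run this before the `ssh-keyscan >> known_hosts` line; that way a reinstalled or swapped host stops the pipeline instead of being silently trusted.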
Where should secrets live — GitLab or the VPS?
Channel tokens and registry credentials belong in GitLab as masked + protected variables; long-lived state (session DBs, OAuth refresh tokens already issued to the Gateway) stays on the VPS volume. The pipeline should never need to read user-session data.
Can we keep manual SSH and still get an audit trail?
Yes — the practical hybrid is: GitLab builds and pushes the image (audited, reproducible), then a human SSHes in and runs `docker compose pull` against the exact digest. You skip the deploy-job complexity but keep registry tags as your timeline.
Pair the Linux Gateway with a cloud Mac mini for the half that pipelines forget
Whether you go full GitLab CI/CD or stay on disciplined SSH, the Linux side of your stack handles webhooks, channel tokens, and cheap always-on networking very well. What it cannot do is run Xcode, sign iOS artifacts, or hold a stable build environment for Apple-native work. A cloud Mac mini M4 brings the same Unix-friendly toolchain you already use on your VPS — SSH, Homebrew, containers where they fit — while Apple Silicon’s unified memory keeps native compile and link steps responsive in a way that no equally priced Windows or Linux box reproduces.
macOS is also a good citizen at the edge of long-running automation: a Mac mini class machine idles around ~4 W, runs near-silently, and shows the kind of multi-month uptime that suits an unattended runner sitting next to your GitLab pipeline. Gatekeeper, SIP, and FileVault narrow the malware surface compared with typical PCs — particularly relevant when the same machine touches signing keys for production releases.
If you are already standardizing how the Linux Gateway is deployed, it is the right moment to standardize the Apple-native half too. VPSSpark cloud Mac mini M4 is the most cost-effective way to give your GitLab (or SSH) workflow a real macOS counterpart — explore plans now and stop letting iOS builds be the part of your pipeline nobody automates.