Setting Up a Remote OpenClaw Instance on a Cloud VPS

Personal update

Mar 28, 2026

Tencent Cloud had a promotional offer: $6 for 6 months of a Linux VPS in Singapore (2 vCPUs, 2 GB RAM, 40 GB SSD, 20 Mbps peak bandwidth, 512 GB monthly traffic). I took it as a trial run for hosting an OpenClaw instance, moving from a purely local setup to a remote, always-available assistant. This post documents the actual steps, configuration choices, and early usage patterns.

Why a Remote Instance?

Before OpenClaw, I used chatbot UIs and GitHub Copilot directly in VS Code. While useful, they were passive, isolated, and manual.

I wanted to explore agentic AI: systems that can act autonomously with access to tools, memory, and scheduling.

Why not set up locally first? The constraints were clear:

  1. Privacy boundaries - my workstation contains sensitive research data; I wasn’t ready to give an AI agent that level of access
  2. Resource contention - with only 6 GB VRAM, the GPU is fully occupied by data-processing tasks (model training, large-dataset operations)
  3. Separation of concerns - keeping experimental AI workflows isolated from production research environments
  4. Psychological comfort - starting with a completely separate sandbox felt safer for a first attempt

This Tencent Lighthouse VPS is my first OpenClaw installation anywhere. A remote instance offered always-on availability, isolation from my workstation, and a low-stakes sandbox for a first attempt.

The VPS: Tencent Lighthouse Linux 2c

Specifications:

  - 2 vCPUs, 2 GB RAM
  - 40 GB SSD storage
  - 20 Mbps peak bandwidth, 512 GB monthly traffic
  - Singapore region

I chose Singapore for low latency to Thailand and because Tencent’s Lighthouse promo was straightforward: no hidden quotas, just a simple lightweight VM.

Installation: Straightforward npm

```bash
# On the fresh Ubuntu instance
ssh ubuntu@<vps-ip>
sudo apt update && sudo apt upgrade -y
sudo apt install curl git -y

# Node.js via NodeSource (OpenClaw requires Node.js 18+)
curl -fsSL https://deb.nodesource.com/setup_lts.x | sudo -E bash -
sudo apt install -y nodejs

# OpenClaw global install
sudo npm install -g openclaw

# Initialise workspace (creates ~/.openclaw)
openclaw init
```
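Before running the init wizard, a quick sanity check on the toolchain is worth the ten seconds (whether the CLI supports `--version` is my assumption, though most npm-distributed CLIs do):

```bash
# Confirm the NodeSource install gave us a recent runtime
node -v    # should report v18 or newer
npm -v
```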

No surprises. The openclaw init wizard asked for model-provider keys (I entered my OpenRouter API key) and set up the default workspace.

Network: Tailscale for Privacy

Instead of exposing the OpenClaw gateway to the public internet, I joined the VPS to my existing Tailscale tailnet. Tailscale gives each device a stable MagicDNS name (e.g., openclaw-vps.tailnet-xxxx.ts.net) and encrypts all traffic between nodes.

Why Tailscale?

  - No OpenClaw port exposed to the public internet
  - Stable MagicDNS names instead of raw IPs
  - Encrypted traffic between my devices and the VPS

After tailscale up, the OpenClaw gateway bound to the Tailscale IP, and I could reach it from any device in my tailnet.
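For reference, the join sequence was roughly the following; the `--hostname` value is illustrative, and the idea that the gateway bind address comes from the Tailscale IPv4 address is how my setup behaved, not documented OpenClaw behaviour:

```bash
# Install and bring up Tailscale on the VPS
# (tailscale up prints an auth URL to approve in a browser)
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up --hostname=openclaw-vps

# Show the Tailscale IPv4 address the gateway should bind to
tailscale ip -4
```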

Messaging: Back to Telegram

I initially tried Line, but Line’s recent requirement for a “business profile” made bot creation cumbersome for personal use. Telegram’s BotFather remains simple and reliable.

Bot setup:

  1. /newbot in BotFather
  2. Name the bot, get the API token
  3. Paste token into OpenClaw’s Telegram provider configuration
  4. Start chatting

One bot per agent for now, though I’m considering separate bots for different model tiers (e.g., a “fast/cheap” bot for quick Q&A and a “reasoning/expensive” bot for complex tasks).
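The token from BotFather can be sanity-checked against the Bot API before it goes anywhere near the provider config. The token below is a made-up placeholder in BotFather's usual `<bot-id>:<secret>` shape:

```bash
# Build the getMe endpoint URL from a (placeholder) bot token
TOKEN="123456789:AAExampleTokenNotReal"
URL="https://api.telegram.org/bot${TOKEN}/getMe"
echo "$URL"

# With network access, a real token returns the bot's identity as JSON:
# curl -s "$URL"
```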

Models & Cost Strategy

Initially, I tried GitHub Copilot’s API (via education credits) and occasional free tiers of Gemini/OpenAI. However, I felt uneasy leaning on those privileges, so I added OpenRouter with a $10 credit just for this experiment, to test various models without burning through my education allowances.

Current fallback chain (cheapest first):

  1. Free models (OpenRouter Auto, Hunter/Healer Alpha when available)
  2. DeepSeek v3.2 - my daily driver ($0.14/1M input, $0.28/1M output)
  3. GPT-5.4 Nano - for critical reasoning ($0.20/1M input, $1.25/1M output)
  4. GPT-5.4 Mini - reserved for complex analysis ($0.75/1M input, $4.50/1M output)

Cron jobs are set to use only free/cheap models. Paper-digest tasks that previously ran on GPT OSS 120b (“big”) now use DeepSeek v3.2: similar quality at a fraction of the cost.
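To make “a fraction of the cost” concrete, here is the per-run arithmetic at DeepSeek v3.2’s listed prices; the token counts are invented round numbers for illustration, not measured figures:

```bash
# Hypothetical single paper-digest run: 50k input tokens, 5k output tokens
IN=50000
OUT=5000
awk -v i="$IN" -v o="$OUT" \
    'BEGIN { printf "$%.4f\n", i/1e6*0.14 + o/1e6*0.28 }'
# 50k * $0.14/1M + 5k * $0.28/1M = $0.0084 per run
```

A handful of runs like this per day stays under a few cents, which is consistent with the year-long credit estimate below.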

Three-Day Usage Snapshot

The instance has been live for about three days. Typical daily pattern:

Morning (07:00-09:00 GMT+7): the paper-digest cron runs on free/cheap models and delivers summaries to Telegram.

Daytime (ad-hoc): quick Q&A and drafting tasks over Telegram, like the writing of this post.

Evening (automated): scheduled jobs, restricted to free/cheap models.

Rough cost estimate over three days: a few cents per day of OpenRouter credit. At this rate, the $10 credit would last about a year of similar usage.

Example: Writing This Post via Telegram

This post itself was drafted through the remote OpenClaw instance:

  1. Initial request (Telegram): “Let’s write a post about my OpenClaw VPS setup”
  2. Q&A session - I answered the agent’s questions about provider, specs, network, models
  3. Draft generation - it produced a structured Markdown draft with code blocks
  4. Review/edit - I read the draft via the direct Hugo URL and suggested changes
  5. Finalisation - it incorporated my notes and added the technical footnote

The entire workflow happened without touching my local machine: pure Telegram → VPS → Hugo site.

Lessons So Far

What works well: the Telegram interface, the Tailscale-only network, and the cheap-first model fallback chain.

Potential improvements: the paper-digest keyword extraction and relevance scoring still need tuning, and the fallback chain may change with longer-term usage.

Why Keep This Post Private (For Now)?

This draft is marked private: true because:

  1. I want to live with the setup a few more weeks before declaring it “stable”
  2. The paper‑digest cron needs refinement – I’m still tuning the keyword extraction and relevance scoring
  3. The cost/token numbers are early estimates; I’d like a full month of data
  4. I might adjust the model fallback chain based on longer‑term usage
  5. This blog update is the first personal‑data integration – I haven’t yet connected emails, calendars, or other sensitive services

Hugo’s private‑post feature is perfect for this: I can build and view the post locally or on the live site via direct URL, verify formatting and content, then simply remove the private flag when ready. The post won’t appear in lists or be indexed by search engines until I make it public.
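For concreteness, the front matter pattern looks roughly like this; the exact parameter name is whatever the blog’s templates check, so treat it as a sketch:

```yaml
---
title: "Setting Up a Remote OpenClaw Instance on a Cloud VPS"
date: 2026-03-28
private: true   # delete this line (or set false) to publish
---
```

Because `private` here is a custom page parameter rather than Hugo’s built-in `draft` (draft pages are not built at all by default), the page is still rendered and reachable by direct URL, just excluded from list templates.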

When I remove the private flag, the post will appear in the blog’s lists and feeds and become indexable by search engines. Until then, it’s accessible only via the direct link, which is useful for sharing drafts with collaborators.

The Agentic AI Learning Curve

Moving from chatbot UIs to OpenClaw has been educational: from passive Q&A to active assistance, from isolated tools to an integrated workspace, and from manual steps to automated routines.

The VPS trial is as much about learning agent workflows as it is about technical infrastructure. How do you design tasks an AI can execute reliably? How do you balance autonomy with oversight?

Most importantly: How comfortable do I become with delegating increasingly personal tasks? This blog-post update is the first test: allowing the agent to read my Hugo workspace and write to it. Email, calendar, and deeper file-system access would come later, only if this trial proves both technically robust and psychologically comfortable.

The 6-month, $6 VPS is essentially a comfort-zone testbed. If I find myself naturally relying on it for daily workflows without anxiety, that’s the green light to explore deeper integrations. If not, I’ve spent minimal resources to learn where my boundaries lie.

Technical Footnote

This post was drafted by DeepSeek v3.2 (via OpenRouter) running on the remote OpenClaw instance described above. The conversation happened over Telegram, from Bangkok to the Tencent Cloud VPS in Singapore, with final editing and publishing via the Hugo static-site generator. Total token cost for the drafting session: approximately $0.012.