OpenClaw turned my Mac Mini into a 24/7 AI employee

OpenClaw (formerly Clawdbot) is a self-hosted AI agent that connects Claude, GPT, and Gemini to your messaging apps and desktop. Here's how to set it up on a Mac Mini.

Peter Steinberger — the guy behind PSPDFKit — released an open-source project in late 2025 that quietly became one of the fastest-growing repositories in GitHub history. It launched as Clawdbot, briefly became Moltbot after Anthropic sent a trademark notice, and is now called OpenClaw. It hit 9,000 GitHub stars in 24 hours and crossed 168,000 in three weeks. Logan Kilpatrick, head of Google AI Studio, publicly endorsed it.

The hype is real, and for once, so is the substance.

What OpenClaw actually is

OpenClaw is a self-hosted AI agent that does two things:

  1. Connects to LLMs. Claude, GPT, Gemini — you pick the model, plug in your API key or OAuth, and the agent routes your requests to it.
  2. Connects to your messaging apps. WhatsApp, Telegram, Slack, Discord, iMessage, Signal, Microsoft Teams, Google Chat, Matrix — basically anything you already use to communicate.

The result: you text your Mac Mini from your phone, and an AI agent with full system access executes the task. Edit a spreadsheet. Run a script. Search your files. Draft a response. It’s not a chatbot — it’s a personal AI with hands.

The key architectural decision is that everything runs locally. Your prompts and files never leave your hardware except when sent to whichever model API you’ve configured. This is the self-hosted approach 36Kr highlighted — infrastructure you own, not a cloud service that owns you.

Why this needs a dedicated machine

You don’t want this running on your daily driver. OpenClaw has full system access — it can read files, run shell commands, control your browser. That’s the point, but it’s also why isolation matters.

A Mac Mini is the obvious choice here. I covered the hardware argument in detail in the agent server article — unified memory, Neural Engine, UNIX stability, $499 starting price, 5-watt idle draw. But OpenClaw adds another reason: iMessage.

iMessage integration is the feature most people want, and it requires macOS. You can run OpenClaw on Linux or Windows via WSL2, but if you want to text your AI agent from your iPhone and have it respond in iMessage, you need a Mac. A Mac Mini M4 is the cheapest way to get there.

Setting it up

The whole process takes about ten minutes. Here’s the walkthrough.

Prerequisites

  • A Mac Mini (or any Mac, but the Mini is purpose-built for this)
  • Node.js 22 or higher
  • An API key or subscription for your preferred LLM (Anthropic, OpenAI, or Google)
  • At least one messaging app you want to connect
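Before running the installer, a quick sanity check on the Node requirement saves a failed install. A minimal preflight sketch (assumes `node --version` prints the usual `vXX.Y.Z` format):

```shell
# Preflight: confirm Node.js 22+ is on the PATH before installing.
required=22
version=$(node --version 2>/dev/null)          # e.g. "v22.14.0"
major=$(printf '%s' "$version" | sed 's/^v\([0-9]*\).*/\1/')
if [ -z "$major" ]; then
  echo "Node.js not found - install it first (e.g. brew install node)"
elif [ "$major" -lt "$required" ]; then
  echo "Node.js $version found, but v$required+ is required"
else
  echo "Node.js $version OK"
fi
```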

Step 1: Install OpenClaw

Open Terminal and run the one-line installer:

curl -fsSL https://openclaw.ai/install.sh | bash

This handles Node detection, installs the CLI, and launches the onboarding wizard. If you prefer manual installation:

npm install -g openclaw@latest
openclaw onboard --install-daemon

The --install-daemon flag installs a launchd service so the Gateway stays running in the background — even after you close Terminal, even after a reboot. This is critical. The whole point is that this thing runs 24/7.
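To confirm the daemon actually registered, you can grep launchd's job list. The service label below is a guess — check what `launchctl list` actually shows on your machine:

```shell
# Look for the OpenClaw Gateway among registered launchd jobs.
# The label is assumed to contain "openclaw" - grep case-insensitively.
if launchctl list 2>/dev/null | grep -qi openclaw; then
  echo "Gateway daemon registered"
else
  echo "daemon not found - re-run: openclaw onboard --install-daemon"
fi
```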

Step 2: Run through the wizard

The wizard asks you a few questions:

Safety acknowledgment. It’ll ask you to confirm you understand this gives an AI agent system access. This is real — treat the machine as a sandbox, not your production box.

Onboarding mode. Pick Quickstart unless you want granular control over every setting.

AI model provider. Choose Anthropic, OpenAI, or Google. You’ll paste your API key or authenticate via OAuth. The docs recommend Claude Pro/Max with Opus 4.6 for long-context strength and prompt-injection resistance. I agree — an agent running unattended needs a model that follows instructions without drifting, and that’s exactly Opus 4.6’s strength.

Messaging channel. Pick your app. For WhatsApp, you’ll scan a QR code. For Telegram, you’ll configure a bot token. For iMessage, the Mac handles it natively.

Skills. OpenClaw has a skill registry called ClawHub. You can enable capabilities like web search (requires a Brave API key), browser control, file access, and more. You can say no to anything you don’t need yet and add skills later.

Step 3: Configure your model

The minimal configuration lives in ~/.openclaw/openclaw.json:

{
  "agent": {
    "model": "anthropic/claude-opus-4-6"
  }
}

That’s it for basic operation. The full config reference covers gateway settings, channel-specific options, security policies, and model fallbacks.
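The Gateway reads this file on startup, so a stray comma can break it silently. A quick syntax check before restarting — this sketch uses `python3 -m json.tool` (which ships with macOS) as an ad-hoc JSON linter:

```shell
# Validate ~/.openclaw/openclaw.json before restarting the Gateway.
CONFIG="$HOME/.openclaw/openclaw.json"
if python3 -m json.tool "$CONFIG" >/dev/null 2>&1; then
  echo "config OK"
else
  echo "config invalid or missing: $CONFIG"
fi
```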

Step 4: Go headless

If you’re running this on a dedicated Mac Mini, set it up for remote access:

  • System Settings → General → Sharing → Remote Login — enables SSH
  • System Settings → General → Sharing → Screen Sharing — for when you need a GUI
  • System Settings → General → Startup Disk → Options — set to auto-restart after power failure

Now close the lid on your laptop. The Mac Mini keeps running, the Gateway keeps listening, and you interact with your agent through your messaging app of choice.

How it works under the hood

OpenClaw runs a local Gateway — a WebSocket control plane on ws://127.0.0.1:18789 — that coordinates everything. Messages come in from your connected channels and get routed to an embedded agent called Pi, which forwards the prompt to your model provider. The model responds with instructions, and Pi executes them locally.

This is the architecture 36Kr described as a “dual-connection model” — LLMs on one end, desktop apps on the other, with your machine as the bridge.
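Since the Gateway binds a plain TCP socket at 127.0.0.1:18789, you can probe it from a shell on the Mini itself. This sketch uses bash’s `/dev/tcp` redirection (bash-specific; `nc -z 127.0.0.1 18789` works too):

```shell
# Probe the Gateway's control-plane port on localhost.
# port_open succeeds only if something accepts the TCP connection.
port_open() { (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; }

if port_open 127.0.0.1 18789; then
  echo "Gateway listening on 18789"
else
  echo "Gateway not reachable - is the daemon running?"
fi
```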

The local-first design avoids the runaway cost of routing every step of the loop through cloud APIs. Orchestration happens on-device; only the actual model inference hits the API. This is the same principle behind the hybrid architecture I described in the agent server article — the Mac Mini is the orchestration layer, and cloud APIs are reserved for the heavy reasoning.

Persistent memory

OpenClaw has persistent memory across sessions. Tell it something today, it remembers it next week. This is the passive context principle applied at the system level — the agent doesn’t need to retrieve your preferences or project details on demand. They’re already loaded.

Combine this with AGENTS.md files in your project repos and specs describing what you’re building, and the agent has deep context without you repeating yourself. The Mac Mini never sleeps, the context never expires.
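Dropping an AGENTS.md into each project repo is how you feed the agent that standing context. The contents below are purely illustrative — describe your actual project and rules instead:

```shell
# Write a minimal AGENTS.md into the current project so the agent
# picks up context automatically. Contents are illustrative only.
cat > AGENTS.md <<'EOF'
# Agent notes
## Project
Static site generator, Node 22, built with `npm run build`.
## Rules
- Run `npm test` before proposing changes.
- Stay inside this repository.
EOF
echo "wrote AGENTS.md ($(wc -l < AGENTS.md | tr -d ' ') lines)"
```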

Security: define what the agent can do

OpenClaw has full system access. It can read your files, run commands, control your browser. This is powerful and potentially dangerous — which is exactly why it belongs on a dedicated Mac Mini, not your primary machine.

Set the boundaries in terms of what the agent can do, not what it can’t. It can access your development projects. It can run tests. It can draft messages. It can search the web. It can manage files in designated directories. Keep production credentials, customer data, and deployment keys off this box entirely. Those get injected through your CI pipeline, not stored on the agent server.
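One simple way to make those boundaries concrete is a dedicated workspace directory holding everything the agent is allowed to touch. The path and layout here are a suggestion, not an OpenClaw convention:

```shell
# A scoping convention: one workspace the agent works inside,
# with everything else (credentials, customer data) kept out of it.
WORKSPACE="$HOME/agent-workspace"
mkdir -p "$WORKSPACE/projects" "$WORKSPACE/scratch"
# Clone only the repos the agent should work on into $WORKSPACE/projects.
echo "workspace ready at $WORKSPACE"
```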

The DM pairing mode on messaging platforms is enabled by default — meaning the agent only responds to your direct messages, not random people in group chats. Leave this on.

Uninstalling

If you want to remove it:

openclaw uninstall

Select all components, confirm, done. Clean removal.

The bigger picture

OpenClaw is what happens when you take the loop and make it always-on. You’re not sitting at your desk prompting an agent anymore — you’re texting it from the grocery store and coming home to finished work.

The setup is a Mac Mini running OpenClaw with Opus 4.6 as the model, your projects cloned with specs and AGENTS.md files, behavioral rules defining how the agent operates, and positive security framing keeping it in scope. That’s the full stack — from good prompts to dedicated hardware.

168,000 GitHub stars in three weeks isn’t hype. It’s people realizing that a $499 computer and an open-source project are all it takes to have a personal AI that actually does things.