
Running OpenClaw on a Raspberry Pi with GPT-5-mini: The...

Why a Raspberry Pi paired with GPT-5-mini via GitHub Copilot Pro is the sweet spot for running OpenClaw. Full setup guide for around 100 dollars in hardware.

Luca Berton
· 3 min read

The Hardware Debate: Big Iron vs Small Board

There’s a gamble happening right now in the AI agent community. One camp says buy a machine with 64GB+ RAM and hope local models catch up in capability. The other camp says run your agent on something tiny and use state-of-the-art API models. After running OpenClaw in production for months, I’m firmly in the second camp.

Why a Raspberry Pi Is Enough

OpenClaw’s gateway is lightweight. It’s a Node.js process that manages messaging channels (Discord, WhatsApp, Telegram), orchestrates tool calls, and routes prompts to an LLM provider. The actual inference happens elsewhere — on OpenAI’s servers, Anthropic’s infrastructure, or GitHub Copilot’s backend.

A Raspberry Pi 5 with 8GB RAM handles this effortlessly:

# Install OpenClaw on Raspberry Pi OS (64-bit)
curl -fsSL https://get.openclaw.ai | bash

# Or clone and build
git clone https://github.com/openclaw/openclaw.git
cd openclaw
npm install
npm link   # put the `openclaw` CLI on PATH (skip if you used the install script above)
openclaw onboard

The gateway idles at roughly 150-250MB RAM. Even with multiple channels active, memory stays well under 1GB. CPU usage spikes briefly during tool execution but sits near zero otherwise — the Pi spends most of its time waiting for API responses.
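You can verify those numbers on your own Pi by checking the gateway process's resident memory directly. A quick sketch; it assumes the gateway shows up as a process matching "openclaw" via `pgrep -f`, and falls back to the current shell's PID so the command always prints something:

```shell
# Report resident memory (RSS) of the gateway process in MB.
GATEWAY_PID=$(pgrep -f openclaw | head -1)
ps -o rss= -p "${GATEWAY_PID:-$$}" | awk '{printf "%.0f MB\n", $1/1024}'
```

RSS is reported in kilobytes on Linux, hence the division by 1024. Run it a few times during tool execution to see the brief spikes the gateway exhibits.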

GPT-5-mini: The Sweet Spot

Here’s the key insight: GPT-5-mini comes included with GitHub Copilot Pro ($10/month), with no per-token charges. You get a state-of-the-art model that outperforms any local model you could run on consumer hardware.

Configure it in OpenClaw:

# openclaw.yaml
models:
  default: github-copilot/gpt-5-mini
  
providers:
  github-copilot:
    type: copilot
    # Auth handled via `openclaw configure --section copilot`

Compare this to running a local model:

  • Llama 3.3 70B needs 40GB+ VRAM for decent quality — that’s a $1,500+ GPU
  • Mistral 7B runs on 8GB but can’t match GPT-5-mini’s reasoning
  • Phi-3 is fast locally but struggles with complex tool use
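The 40GB+ figure for the 70B model is easy to sanity-check: quantized weights take roughly params × bits-per-weight ÷ 8 bytes. A back-of-envelope sketch, where 4.5 bits/weight approximates a typical Q4-class quantization (an assumption), and KV cache plus activations add several GB on top:

```shell
# Rough weight-memory estimate: params * bits_per_weight / 8 bytes
awk 'BEGIN {
  params = 70e9        # Llama 3.3 70B parameter count
  bpw    = 4.5         # ~Q4-class quantization (assumed)
  gb     = params * bpw / 8 / 1e9
  printf "~%.0f GB for weights alone\n", gb
}'
# → ~39 GB for weights alone
```

Add the KV cache for a usable context window and you land comfortably above 40GB, which is why this class of model needs workstation-grade GPUs.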

GPT-5-mini gives you superior code generation, multi-step reasoning, and tool orchestration — all from an $80 board.

The Full Pi Setup

Here’s my production-grade Pi setup:

# Hardware
# - Raspberry Pi 5 (8GB)
# - 256GB NVMe SSD via Pi HAT (not SD card!)
# - Official power supply (5V 5A)

# Flash Raspberry Pi OS Lite (64-bit)
# Enable SSH, set hostname to 'openclaw'

# After first boot
sudo apt update && sudo apt upgrade -y
# Raspberry Pi OS ships an older Node.js; if the npm install below
# complains about the Node version, use NodeSource instead:
#   curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
sudo apt install -y git nodejs npm

# Install OpenClaw
git clone https://github.com/openclaw/openclaw.git
cd openclaw && npm install && npm link   # npm link puts the CLI on PATH

# Run the onboard wizard
openclaw onboard

# Configure Copilot auth
openclaw configure --section copilot

# Start as a service
openclaw gateway install-service
openclaw gateway start
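If you want the gateway to recover from the occasional crash between reboots, a tiny watchdog on top of the service is cheap insurance. A sketch; it assumes `openclaw status` exits nonzero when the gateway is down:

```shell
# Write a watchdog script that restarts the gateway if the status
# check fails. Install it somewhere on PATH and run it from cron, e.g.:
#   */5 * * * * /usr/local/bin/openclaw-watchdog.sh
cat > openclaw-watchdog.sh <<'EOF'
#!/bin/sh
openclaw status >/dev/null 2>&1 || openclaw gateway start
EOF
chmod +x openclaw-watchdog.sh
```

On a box this small there is no monitoring stack to lean on, so a five-minute cron check is about the right weight.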

Power Consumption & Cost

The Pi 5 draws about 5-8W under typical OpenClaw load. That’s roughly:

  • $5-8/year in electricity
  • $10/month for Copilot Pro (includes GPT-5-mini)
  • $0 for the model inference itself
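The electricity figure checks out with simple arithmetic. A sketch assuming a 6.5W average draw (midpoint of the 5-8W range) and a $0.12/kWh rate; plug in your local utility rate:

```shell
# Annual cost = watts * 24h * 365d / 1000 (kWh) * rate ($/kWh)
awk 'BEGIN {
  watts = 6.5          # midpoint of the 5-8W range
  rate  = 0.12         # assumed $/kWh; adjust for your utility
  kwh   = watts * 24 * 365 / 1000
  printf "%.0f kWh/year, about $%.2f/year\n", kwh, kwh * rate
}'
# → 57 kWh/year, about $6.83/year
```

Even at European electricity prices (roughly double that rate), the Pi's annual power bill stays under the cost of a single month of Copilot Pro.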

Compare that to a Mac Mini M4 Pro 24GB:

  • $600-800 hardware cost
  • Still needs API models for best quality (local models on 24GB unified memory are limited to ~13B parameter models at decent quantization)
  • 15-30W power draw

When Local Models Make Sense

I won’t pretend API-only is always the answer. Local models win when:

  1. Privacy is non-negotiable — air-gapped environments, sensitive data
  2. Latency matters more than quality — real-time autocomplete, fast iterations
  3. You’re offline frequently — field work, travel without reliable internet
  4. You want to experiment — fine-tuning, custom models, research

For those cases, a Mac Mini M4 Pro with 48GB+ unified memory is the better bet. But for most people running OpenClaw as a personal assistant? The Pi + API combo is unbeatable.

My Recommendation

Start with a Raspberry Pi 5 and GPT-5-mini. Spend the money you saved on a Copilot Pro subscription instead of hardware. If you hit the limits — maybe you need vision models, or you’re sending 500+ messages a day and hitting rate limits — then consider upgrading.

The AI hardware landscape is moving fast. The 64GB machine you buy today might be obsolete in 18 months when local models improve. But an $80 Pi running cloud models? That stays relevant as long as the APIs keep getting better — and they will.

# Quick health check after setup
openclaw status

# Expected output on Pi 5:
# Gateway: running (pid 1234)
# Model: github-copilot/gpt-5-mini
# Channels: discord (connected), whatsapp (connected)
# Memory: 210MB / 8192MB
# Uptime: 14d 6h 32m

The best AI agent setup isn’t the most expensive one. It’s the one that just works, every day, without thinking about it.
