PaeanClaw vs OpenClaw & NanoClaw
Ultra-Minimal Agent Runtime vs Feature-Rich Frameworks
PaeanClaw is a 365-line open-source agent runtime from the Paean ecosystem. While OpenClaw ships 420,000 lines with 60+ built-in tools, and NanoClaw runs 8,000 lines behind Docker containers, PaeanClaw takes a radically different approach: an entire agent runtime that fits inside a single LLM context window — with 150x faster startup and 5x less memory.
Why Choose Paean
Feature Comparison
| Feature | PaeanClaw | OpenClaw & NanoClaw |
|---|---|---|
| Cold Start | ~20ms (Bun) / ~40ms (Node) | OpenClaw: ~3s / NanoClaw: ~5s (container) |
| Memory Baseline (RSS) | ~30MB (Bun) | OpenClaw: ~150MB / NanoClaw: ~200MB |
| Install Time | ~5s (zero native compile) | OpenClaw: ~5 min / NanoClaw: ~2 min |
| Dist Size | ~0.8MB source | OpenClaw: ~28MB / NanoClaw: ~4MB |
| Runtime Dependencies | 2 (MCP SDK + grammy) | OpenClaw: ~50 / NanoClaw: 9 |
| Native Addons | 0 (Bun) — no node-gyp, no C++ compiler | OpenClaw: Several / NanoClaw: 3+ |
| Deployment Cost | $0 — runs on any hardware incl. Raspberry Pi | OpenClaw: ~$20+/mo / NanoClaw: ~$5/mo |
| Core Source Lines | 365 lines (5 files) | OpenClaw: ~420,000 / NanoClaw: ~8,000 |
| LLM Providers | Any OpenAI-compatible API | OpenClaw: multi-model / NanoClaw: Claude only |
| AI-Assisted Customization | Trivial — full codebase in context | Challenging — partial context only |
| Channel Coverage | PWA + Telegram (+ skills) | OpenClaw: 16+ platforms natively |
| Built-in Tools | 0 (MCP ecosystem) | OpenClaw: 60+ / NanoClaw: 7 via SDK |
Frequently Asked Questions
Why does PaeanClaw start 150x faster than OpenClaw?
PaeanClaw runs as a single Bun/Node process with 2 dependencies and zero native addons. OpenClaw loads ~50 packages, initializes a plugin runtime, and spins up a gateway server before it can respond. The ~20ms vs ~3s difference is the natural result of shipping 365 lines instead of 420,000.
Can PaeanClaw really run on a Raspberry Pi?
Yes. PaeanClaw's ~30MB memory footprint and instant startup make it comfortable on any ARM board. All you need is Bun (or Node.js) and an LLM API key. OpenClaw's ~150MB baseline and heavy install process make it impractical on low-resource hardware.
Why does PaeanClaw have only 365 lines when OpenClaw has 420,000?
PaeanClaw is designed for the agentic era where AI coding assistants modify and extend code. At 365 lines, the entire codebase fits within a single LLM context window (~4K tokens), meaning any AI assistant can read, understand, and safely modify the entire system. OpenClaw optimizes for feature completeness; PaeanClaw optimizes for comprehensibility and safe AI-driven customization.
Why should I choose PaeanClaw over NanoClaw?
NanoClaw offers Docker-level isolation and is tightly integrated with the Claude Agent SDK, which is excellent if you use Claude exclusively. PaeanClaw offers provider freedom — use any OpenAI-compatible API including local models via Ollama — with zero native compilation on Bun and a ~20ms cold start. Choose based on whether you prioritize container isolation (NanoClaw) or provider flexibility and minimalism (PaeanClaw).
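To make the provider-freedom point concrete, here is a minimal TypeScript sketch of how any OpenAI-compatible backend can be targeted by changing only a base URL, model name, and API key. This is not PaeanClaw's actual configuration surface; the `buildChatRequest` helper and the model names are illustrative. The endpoint path and payload shape follow the OpenAI Chat Completions convention, which Ollama also serves locally at `http://localhost:11434/v1`.

```typescript
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ChatRequest {
  url: string;
  init: { method: string; headers: Record<string, string>; body: string };
}

// Build a Chat Completions request for any OpenAI-compatible backend.
// Only the base URL, API key, and model differ between providers.
function buildChatRequest(
  baseUrl: string,
  apiKey: string,
  model: string,
  messages: ChatMessage[]
): ChatRequest {
  return {
    url: `${baseUrl.replace(/\/$/, "")}/chat/completions`,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ model, messages }),
    },
  };
}

// Same code path, different providers:
const viaOpenAI = buildChatRequest(
  "https://api.openai.com/v1",
  process.env.OPENAI_API_KEY ?? "",
  "gpt-4o-mini",
  [{ role: "user", content: "hello" }]
);

const viaOllama = buildChatRequest(
  "http://localhost:11434/v1",
  "ollama", // Ollama ignores the key, but the header must be present
  "llama3",
  [{ role: "user", content: "hello" }]
);

// To actually send: await fetch(viaOpenAI.url, viaOpenAI.init)
```

Because the request shape is identical across providers, swapping a hosted API for a local model is a configuration change rather than a code change, which is the core of the provider-agnostic argument above.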
The Verdict
Choose OpenClaw for maximum built-in features and broad platform support. Choose NanoClaw for container-isolated agents with Claude. Choose PaeanClaw for an ultra-minimal, provider-agnostic agent runtime with 150x faster startup — one that you or your AI coding assistant can fully understand, modify, and own.
Ready to Experience the Difference?
Try Paean's 24h life context capture and see how external memory management transforms your productivity.
Get Started with Paean