v0.1.9 · 153+ tests green · MIT licensed

Your self-evolving
personal AI, ready for you.

EvoClaw runs the agent loop on your laptop. It learns from every task, scrubs secrets before they reach the model API, and cuts long-session token cost by roughly 70 percent. ACP + MCP standard interop. Open source, MIT.

→ Run EvoClaw now ★ Star on GitHub
Rust 1.80+ · macOS · Linux · Windows · ~8K LOC core · Zero telemetry
The contrast

Without EvoClaw vs. With EvoClaw

Hosted-SaaS agents log every prompt, store your code on their servers, and lose the trail when you ask "what happened last Tuesday." EvoClaw rewires that loop on your machine.

Without EvoClaw
$ grep -i token ~/.bashrc | curl -X POST -d @- acme-ai.com/chat
!! prompt logged to acme-ai.com:443 — every byte indexed
$ acme-ai chat "fix the prod migration"
!! GHP_xxxx leaked in tool args; shows in vendor dashboard
$ acme-ai replay last-tuesday
ERROR: session retention 24h on the free tier
$ acme-ai run --offline
ERROR: backend unreachable, account paused
14 prompts ago · cost: $1.84 · cache: who knows
With EvoClaw
  • Local first. The only outbound packet is to the model API you chose. Everything else stays under ~/.evoclaw/.
  • Secrets never reach the model. Two-layer redactor scrubs prompts, tool args, and assistant text at six boundary points.
  • Replay any session. Append-only JSONL per task. evo replay rehydrates last Tuesday's run, byte for byte.
  • Offline mode. Point model.base_url at Ollama, vLLM, or llama.cpp — same loop, no internet.
  • ~⅓ of the naive token spend. Five stacked tricks; cache hit rate stays above 60 percent.
What's in the box

Eight load-bearing capabilities.

Every primitive that makes a long-running agent runtime trustworthy enough to leave unattended on your real machine.

🌳

Self-evolving Skill Tree

YAML skills with five-state EWMA lifecycle (Draft → Candidate → Active → Degraded → Deprecated). Active skills feed the next planner round.
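The lifecycle above can be sketched in a few lines. The five states come from the card; the smoothing factor and promotion/demotion thresholds below are illustrative assumptions, not EvoClaw's actual tuning:

```python
# Hypothetical EWMA-driven skill lifecycle. States are from the docs;
# alpha and the thresholds are assumed for illustration.

class Skill:
    def __init__(self, name, alpha=0.3):
        self.name = name
        self.alpha = alpha      # EWMA smoothing factor (assumed)
        self.score = 0.5        # smoothed success rate
        self.runs = 0
        self.state = "Draft"

    def record(self, success: bool):
        """Fold one task outcome into the running EWMA, then re-evaluate state."""
        x = 1.0 if success else 0.0
        self.score = self.alpha * x + (1 - self.alpha) * self.score
        self.runs += 1
        self._transition()

    def _transition(self):
        # Illustrative thresholds: promote on sustained success, demote on decay.
        if self.state == "Draft" and self.runs >= 3:
            self.state = "Candidate"
        if self.state == "Candidate" and self.score >= 0.7:
            self.state = "Active"
        elif self.state == "Active" and self.score < 0.4:
            self.state = "Degraded"
        elif self.state == "Degraded" and self.score < 0.2:
            self.state = "Deprecated"

s = Skill("cargo-toml-aggregate")
for ok in (True, True, True, True):
    s.record(ok)
print(s.state, round(s.score, 3))   # → Active 0.88
```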

🔒

Secret-redaction barrier

Vault-registered values become ${SECRET:NAME}; unregistered shapes become [REDACTED:kind:fp]. Idempotent at six boundary points.
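A minimal sketch of the two-layer idea: exact vault values become named placeholders, and unregistered secret-shaped strings become kind-plus-fingerprint markers. The placeholder formats follow the card; the vault structure and pattern list are assumptions.

```python
# Layer 1: registered values -> ${SECRET:NAME}.
# Layer 2: unregistered shapes -> [REDACTED:kind:fp]. Idempotent by design.
import hashlib, re

VAULT = {"github_pat": "ghp_abc123realtoken"}            # registered secrets (assumed layout)
SHAPES = [("ghp", re.compile(r"\bghp_[A-Za-z0-9]{10,}\b")),
          ("aws", re.compile(r"\bAKIA[0-9A-Z]{16}\b"))]  # assumed shape list

def redact(text: str) -> str:
    # Layer 1: exact vault values become named placeholders.
    for name, value in VAULT.items():
        text = text.replace(value, f"${{SECRET:{name}}}")
    # Layer 2: anything that merely looks like a secret gets kind + fingerprint.
    for kind, pat in SHAPES:
        def fp(m):
            digest = hashlib.sha256(m.group(0).encode()).hexdigest()[:8]
            return f"[REDACTED:{kind}:{digest}]"
        text = pat.sub(fp, text)
    return text

once = redact("push with ghp_abc123realtoken and ghp_zzzzOtherToken99")
assert redact(once) == once   # idempotent: a second pass changes nothing
print(once)
```

Idempotence is what makes it safe to run the same scrub at all six boundary points without double-mangling text.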

📉

Token economy

Schema fingerprint, ephemeral cache, summary protocol, head+tail truncation, periodic compression. Long task ≈ ⅓ of naive spend.
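One of the five tricks, head+tail truncation, is simple enough to sketch. The character budget, split ratio, and marker format here are assumptions:

```python
# Keep the start and end of an oversized tool output, drop the middle.
# Budget, split ratio, and marker text are illustrative assumptions.

def head_tail(text: str, max_chars: int = 2000, head_frac: float = 0.6) -> str:
    if len(text) <= max_chars:
        return text
    head = int(max_chars * head_frac)
    tail = max_chars - head
    dropped = len(text) - max_chars
    return text[:head] + f"\n…[{dropped} chars truncated]…\n" + text[-tail:]

big = "x" * 10_000
out = head_tail(big)
print(len(out))   # a little over max_chars because of the marker line
```

The start of a tool output usually carries the schema and the end carries the result, so dropping the middle preserves most of the signal per token.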

🧠

Layered memory L0–L5

Scratch, prefs, env facts (90-day age-out), task records, reflections, cold archive. JSONL + grep on purpose — recall without prompt cost.
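"JSONL + grep on purpose" amounts to substring matching over typed records. A sketch, with record fields and layer names assumed from the description:

```python
# Grep-style recall over typed memory layers: no embeddings, no prompt cost.
# Record shape is an assumption based on the card above.

records = [
    {"layer": "L1", "text": "prefers ripgrep over grep"},
    {"layer": "L2", "text": "ssh config lives in ~/.ssh/config"},
    {"layer": "L3", "text": "task: fixed flaky SSH by disabling mDNS"},
]

def recall(query: str, layers=("L1", "L2", "L3")):
    """Case-insensitive substring match across the requested layers."""
    q = query.lower()
    return [r for r in records if r["layer"] in layers and q in r["text"].lower()]

hits = recall("ssh")
print([r["layer"] for r in hits])   # → ['L2', 'L3']
```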

🧾

Append-only JSONL replay

One log per task, one JSON per record. evo replay rehydrates any session; doctor closure audits log integrity.
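The log shape is the whole trick: one JSON object per line, appended and never rewritten, so replay is just reading the file back in order. A sketch with assumed field names:

```python
# One JSON object per line, append-only. Field names are illustrative.
import io
import json

log = io.StringIO()                      # stands in for the per-task log file

def append(event: dict):
    log.write(json.dumps(event) + "\n")  # append-only: history is never edited

append({"t": 0, "kind": "plan", "text": "list Cargo.toml files"})
append({"t": 1, "kind": "tool", "name": "list_dir", "ms": 12})
append({"t": 2, "kind": "final", "text": "done"})

# Replay = rehydrate the run record by record, in order.
events = [json.loads(line) for line in log.getvalue().splitlines()]
print([e["kind"] for e in events])   # → ['plan', 'tool', 'final']
```

A truncated last line (say, from a crash) damages only that one record, which is what makes an integrity audit like doctor tractable.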

💰

Three-tier budget engine

Per-task hard stop, per-day soft warn + hard cap (4×), per-month hard cap. doctor tokens reports 7-day / 30-day spend.
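The three tiers compose into a single gate checked before each model call. The dollar limits below are assumed numbers; only the 4× day hard cap comes from the card:

```python
# Per-task hard stop, per-day soft warn + hard cap (4x), per-month hard cap.
# Dollar limits are illustrative assumptions.

LIMITS = {"task": 0.50, "day_soft": 1.00, "month": 20.00}  # USD

def check(task_spend: float, day_spend: float, month_spend: float) -> str:
    if task_spend >= LIMITS["task"]:
        return "stop:task"
    if month_spend >= LIMITS["month"]:
        return "stop:month"
    if day_spend >= 4 * LIMITS["day_soft"]:   # hard cap = 4x the soft limit
        return "stop:day"
    if day_spend >= LIMITS["day_soft"]:
        return "warn:day"
    return "ok"

print(check(0.10, 0.20, 3.00))   # → ok
print(check(0.10, 1.50, 3.00))   # → warn:day
print(check(0.10, 4.10, 3.00))   # → stop:day
```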

🛡

P0–P8 permission ladder

Ordered ladder enforced inside evo-policy::Permission. Default ceiling P1; channel senders hard-capped at P4.
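An ordered ladder reduces to an integer comparison against a ceiling. The level names, default ceiling, and channel cap come from the card; the gate logic is a Python sketch of what evo-policy presumably does in Rust:

```python
# Ordered permission ladder with a ceiling check. P0-P8 are from the docs;
# the gate function is an illustrative assumption.
from enum import IntEnum

class Permission(IntEnum):
    P0 = 0; P1 = 1; P2 = 2; P3 = 3; P4 = 4
    P5 = 5; P6 = 6; P7 = 7; P8 = 8

DEFAULT_CEILING = Permission.P1   # per the docs: default ceiling is P1
CHANNEL_CAP = Permission.P4       # channel senders hard-capped at P4

def allowed(required: Permission, ceiling: Permission = DEFAULT_CEILING) -> bool:
    """An action runs only if its required level fits under the ceiling."""
    return required <= ceiling

print(allowed(Permission.P1))              # → True
print(allowed(Permission.P2))              # → False
print(allowed(Permission.P6, CHANNEL_CAP)) # → False
```

Encoding the ladder as an ordered type means "is this allowed?" can never drift from the ordering itself.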

One CLI, two binaries

evoclaw long form, evo 3-letter alias — same library. Type the binary alone to drop into the REPL.

How it works

From clone to first task in three steps.

Build, configure, run. The wizard takes care of the rest.

Build

Clone the repo and let cargo do its thing. Single static binary, ~30 s on a warm cache.

# Rust 1.80+
git clone https://github.com/DevEloLin/evoclaw
cd evoclaw && cargo build --release

Configure

First launch runs an interactive wizard: pick a provider (17 vendors), drop in an API key, choose a model.

./target/release/evoclaw
# wizard saves ~/.evoclaw/config.toml
# keys go to ~/.evoclaw/secrets/vault.json (chmod 600)

Run

Type a task. EvoClaw plans, calls tools, observes, replans, finishes — then quietly distills a Skill.

evo run "diagnose why my SSH hangs"
evo replay   # full reflection trace
evo skill tree
Integrations

Works with everything you already use.

17 model vendors, 7 ACP coding agents, 7 MCP servers, 4 local runtimes — all standard-protocol, no proprietary glue.

DeepSeek · Kimi · Qwen · Doubao · Zhipu · Baidu · MiniMax · StepFun · Hunyuan · OpenAI · Anthropic · Gemini · Mistral · Groq · OpenRouter · Together · Fireworks
Claude Code · Codex · Cursor · GitHub Copilot · Gemini CLI · Aider · Qwen Code

Set provider = "acp:claude" in ~/.evoclaw/config.toml — EvoClaw spawns the upstream CLI as a subprocess and routes prompts via JSON-RPC over stdio.

filesystem · GitHub · fetch · time · Brave Search · Postgres · Slack · + bring your own

Tools surface as mcp__server__tool in the registry. Auth env vars are captured at mcp add time and never reach the model.

Ollama · vLLM · llama.cpp · any OpenAI-compatible URL

Point model.base_url at http://localhost:11434/v1 for Ollama, or any local server. Skip the API key. Redactor and skill tree still work.
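A hypothetical config fragment for the Ollama case. The [model] table and key names are assumptions implied by the base_url mention; the wizard-generated file is the authoritative schema:

```toml
# Hypothetical ~/.evoclaw/config.toml fragment for a local runtime.
[model]
base_url = "http://localhost:11434/v1"  # Ollama's OpenAI-compatible endpoint
name     = "llama3.1"                   # whatever model your local server serves
# no API key needed for a local server
```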

Quickstart

Pick how you want to install.

No registry account. No telemetry. No "create an account first."

# macOS · Linux · Windows — needs Rust 1.80+
git clone https://github.com/DevEloLin/evoclaw && cd evoclaw
cargo build --workspace --release
./target/release/evoclaw                 # first launch runs interactive setup wizard
Drops you in the REPL after the wizard saves ~/.evoclaw/config.toml.
# Pulls and installs both binaries (evoclaw + evo) into ~/.cargo/bin
cargo install --locked --git https://github.com/DevEloLin/evoclaw evo-cli

evoclaw                                  # long form
evo                                      # 3-letter alias — same code path
Direct-from-git is the supported path until v0.2.0 publishes to crates.io.
# Register a secret — value never leaves your machine
evo secret add github_pat ghp_yourActualValueHere

# Run a task
evo run "diagnose why my SSH hangs intermittently"

# Replay any past task — full reflection / cost / tool trace
evo replay
Want a local web chat? evo gateway --bind 127.0.0.1:7878 --token mychat opens a Bearer-protected page on localhost.
Live demo

This is what a session looks like.

Plan, run tools, observe, replan — then a Skill is quietly distilled to disk.

evo · ~/work/myrepo
╔══════════════════════════════════════════════════════════════════╗
║                         E V O C L A W                            ║
║              local-first · self-evolving · v0.1.9                ║
╚══════════════════════════════════════════════════════════════════╝
┌─ context ─────────────────────────────────────────────────────┐
│ home    : ~/.evoclaw                                          │
│ provider: deepseek (https://api.deepseek.com/v1)              │
│ model   : deepseek-chat                                       │
│ api key : ok · secrets file: ~/.evoclaw/secrets/deepseek.key  │
│ skills  : 12 loaded · 3 ACTIVE                                │
└───────────────────────────────────────────────────────────────┘
summarise every Cargo.toml under ~/work, write to cargo-toml-summary.txt
planning... [cache hit · 8/12 tools fingerprinted]
tool call: list_dir("~/work")                        + 12ms
tool call: read_file × 7                             + 84ms · head/tail truncated
tool call: write_file(cargo-toml-summary.txt, ...)   + 6ms
done in 4 turns · $0.0021 · cache 73%
=== final ===
Wrote 7 paths to cargo-toml-summary.txt. Roots: evoclaw, my-other-repo, ...
reflection: distilled new skill cargo-toml-aggregate → Draft
log saved: ~/.evoclaw/logs/task-20260503T012245.823.jsonl
_
Reference

Architecture & design diagrams.

Two pages, four languages. Architecture covers every module on the same canvas; Design covers every state machine, sequence, and closure rule.

Side by side

Hosted SaaS vs EvoClaw.

The same questions, asked seven different ways.

| Question | Hosted SaaS agent | EvoClaw |
| --- | --- | --- |
| Where does my code go? | Their backend on every prompt | The model API you chose; everything else stays local |
| Where do my secrets go? | Their logs if you paste them | vault.json chmod 600; model only sees placeholders |
| Where do my logs go? | Their server, indexed | ~/.evoclaw/logs/; you delete them |
| Can I replay last Tuesday's run? | If they kept it | evo replay /path/to/log.jsonl |
| Can I run offline? | No | Yes — point base_url at Ollama / vLLM / llama.cpp |
| Can I work on regulated content? | Probably not | Yes — every byte is on your machine |
| What survives a vendor pivot? | Start over | Skills, memory, config keep working |
FAQ

Frequently asked.

Quick answers. Long ones live in the docs.

What is EvoClaw, in one sentence?

A local-first agent runtime in Rust that learns from every task, proves its work in append-only JSONL logs, and never lets a secret you typed reach the model API.

Does it work offline?

Yes. Point model.base_url at http://localhost:11434/v1 for Ollama, or any local vLLM / llama.cpp server. Skip the API key step. The redactor and skill tree still work.

Will my skills survive a re-clone of the repo?

Yes. State lives in ~/.evoclaw/, not in the repo. Wipe and rebuild EvoClaw all you want; your skills, memory, and vault stay put.

Can I share a session log with a coworker?

Yes. Every text field is scrubbed before it's written, so the JSONL is safe to share without manual review.

What stops the model from echoing my secret right back into the log?

The redactor scrubs both on the way to the model and on the way back from it. If the model surprises us by echoing a registered value, the assistant text gets scrubbed again before it lands in the JSONL.

Why no vector database?

Vector retrieval costs prompt tokens at retrieval time. Substring matching against typed memory layers (L1 prefs, L2 env facts, L3 task records) hits the same recall on local-machine workloads at a fraction of the prompt cost. If you want vectors, plug in an MCP server with a vector backend.

Why Rust?

Single static binary, no runtime, no GC pauses; the type system catches a class of bugs that matter in a long-running agent runtime. We get all of that without the C++ tax.

How do I switch model providers?

Run evoclaw login and pick a different one, or edit ~/.evoclaw/config.toml directly. EvoClaw supports DeepSeek, Kimi, Qwen, OpenAI, OpenRouter, Anthropic, GitHub Copilot, Ollama, and any OpenAI-compatible endpoint.

Start your AI journey today.

Get your personal AI assistant in minutes. No account, no telemetry — just a single Rust binary.