ClawMem is the persistent memory layer that ships with Sentō. It indexes your agent's workspace (`~/workspace/`) and gives the agent fast retrieval over its own notes, prior conversations, and any files you've added.
Without ClawMem, your agent would start every conversation from scratch. With it, it can recall what you talked about last week.
## What gets indexed
By default, ClawMem indexes:
- `~/workspace/CLAUDE.md` — the agent's identity and rules
- `~/workspace/memory/**/*.md` — everything it has learned about you
- Anything else in `~/workspace/` that the agent is told to remember
It doesn't index `.openclaw/`, `node_modules/`, or hidden directories — those are explicitly excluded.
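The exclusion rules above could be implemented with a walk like the following. This is a hedged sketch, not ClawMem's actual code: the function name `collect_indexable` and the Markdown-only filter are illustrative assumptions.

```python
from pathlib import Path

# Directories the docs say are explicitly excluded from indexing.
EXCLUDED = {".openclaw", "node_modules"}

def collect_indexable(workspace: Path) -> list[Path]:
    """Collect Markdown files under the workspace, skipping excluded
    and hidden directories (a simplified, illustrative walk)."""
    files = []
    for path in workspace.rglob("*.md"):
        parts = path.relative_to(workspace).parts
        # Skip if any parent directory is excluded or hidden.
        if any(p in EXCLUDED or p.startswith(".") for p in parts[:-1]):
            continue
        files.append(path)
    return files
```

Note that the filter inspects only parent directories, so top-level files like `CLAUDE.md` still pass even though their names contain no directory component.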
## How search works
ClawMem runs two complementary retrieval methods:
- BM25 — classic keyword search. Fast, zero dependencies.
- Gemini embeddings (optional) — semantic / vector search. Requires a free Gemini API key (set during `sento init` or later via `sento config`).
If you skip Gemini, you get keyword-only search. That still works well for most agents — the entire workspace is typically tens to hundreds of small Markdown files, not a corpus.
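The keyword pass can be sketched with a minimal Okapi BM25 scorer. This is illustrative only — the function name, tokenization, and parameter defaults (`k1`, `b`) are assumptions, not ClawMem's actual implementation:

```python
import math
from collections import Counter

def bm25_scores(query: str, docs: list[str], k1: float = 1.5, b: float = 0.75) -> list[float]:
    """Score each document against the query with Okapi BM25
    (whitespace tokenization, lowercase folding)."""
    tokenized = [d.lower().split() for d in docs]
    avgdl = sum(len(t) for t in tokenized) / len(tokenized)
    n = len(docs)
    # Document frequency: how many docs contain each term.
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log(1 + (n - df[term] + 0.5) / (df[term] + 0.5))
            score += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(toks) / avgdl)
            )
        scores.append(score)
    return scores
```

At workspace scale (tens to hundreds of small files), a scorer like this runs in well under a millisecond, which is why keyword-only search remains viable without embeddings.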
## Writing to memory
The agent writes to memory by itself when you tell it to:
> "remember that I prefer concise answers"
It creates or updates a file under `~/workspace/memory/`, categorized by type (user profile, feedback, project context, reference). The file system is the memory — no database, no remote service.
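A memory write of this shape can be sketched as below. The filename scheme and frontmatter fields are illustrative assumptions, not ClawMem's exact schema — only the four types come from the docs:

```python
from datetime import date
from pathlib import Path

def write_memory(root: Path, kind: str, slug: str, body: str) -> Path:
    """Store one memory entry as a Markdown file with frontmatter
    declaring its type (hypothetical layout)."""
    assert kind in {"user", "feedback", "project", "reference"}
    path = root / f"{slug}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(
        f"---\ntype: {kind}\ncreated: {date.today().isoformat()}\n---\n\n{body}\n"
    )
    return path
```

Because each entry is a plain file, "updating memory" is just rewriting a Markdown file in place — there is no write-ahead log or schema migration to manage.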
## Reading memory
On every new turn, ClawMem runs a quick retrieval pass and injects the most relevant memory entries into the agent's context. The agent sees them the way it sees any other tool result and can use them in its response.
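The per-turn retrieval pass can be sketched like this. Everything here is an assumption for illustration: the term-overlap scoring stands in for the real BM25/embedding scorers, and the `<memory>` wrapper format is invented, not ClawMem's actual injection syntax:

```python
def retrieve(query: str, entries: dict[str, str], top_k: int = 3) -> list[str]:
    """Rank memory entries by naive term overlap with the query
    (stand-in for the real scorer)."""
    q = set(query.lower().split())
    ranked = sorted(
        entries.items(),
        key=lambda kv: len(q & set(kv[1].lower().split())),
        reverse=True,
    )
    return [name for name, _ in ranked[:top_k]]

def build_context(query: str, entries: dict[str, str]) -> str:
    """Wrap the top matches in tagged blocks for injection into
    the agent's context (hypothetical format)."""
    hits = retrieve(query, entries)
    blocks = [f'<memory src="{name}">\n{entries[name]}\n</memory>' for name in hits]
    return "\n".join(blocks)
```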
## Why not a vector DB
Three reasons:
- No setup cost. ClawMem is a few files in your agent's workspace. A vector DB is a server.
- Auditable. You can read your agent's memory with `cat`. There's no opaque embedding space.
- Portable. The memory moves with the agent. Clone the workspace to a new host and the memory comes with it.
## When you might want more
If your agent ends up indexing thousands of files (e.g. a full codebase), BM25 will still work but semantic search becomes more valuable. Connect Gemini for the embeddings — it's free within reasonable rate limits.
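The semantic side reduces to nearest-neighbor search over embedding vectors. In a minimal sketch like the one below, the vectors are supplied directly to keep it self-contained — in ClawMem they would come from the Gemini embeddings API:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def nearest(query_vec: list[float], index: dict[str, list[float]]) -> str:
    """Return the name of the entry whose embedding is closest
    to the query embedding (brute-force scan)."""
    return max(index, key=lambda name: cosine(query_vec, index[name]))
```

A brute-force scan is linear in the number of entries, which is exactly why a vector DB only starts paying for itself once the index grows to thousands of files.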
## Files
- `~/workspace/memory/` — the whole memory tree
- `~/workspace/memory/MEMORY.md` — the index / summary; the agent maintains this itself
- Per-memory Markdown files with frontmatter declaring `type: user | feedback | project | reference`