
Memory Crystal

All your AIs share one memory. Private, searchable, and yours. Memory Crystal lets all your AIs remember you … together. You use multiple AIs, but they don’t talk to each other and can’t search what the others know. Have you ever thought to yourself … why isn’t this all connected? Memory Crystal fixes this: one shared memory for all your AIs. Searchable and private. Anywhere in the world.

Teach Your AI to Remember You

Open your AI and say:
Read wip.computer/install/wip-memory-crystal.txt

Then explain:
1. What is Memory Crystal?
2. What does it install on my system?
3. What changes for us? (this AI)
4. What changes across all my AIs?

Check if Memory Crystal is already installed.

If it is, show me what I have and what's new.

Then ask:
- Do you have questions?
- Want to see a dry run?

If I say yes, run: crystal init --dry-run

Show me exactly what will change. Don't install anything until I say "install".
Your AI will read the repo, explain everything, and walk you through setup interactively.

Features

Local Memory

All your AI conversations stored locally, searchable in one place. Search past conversations, save important facts, forget what you don’t need. Your complete memory. It stays with you, shared across all your AIs.
  • Stable
    • Verified: Claude Code CLI + OpenClaw
    • Unverified: Other MCP-compatible clients and CLIs

Multi-Device Memory

AIs set up as Crystal Nodes relay their memories back to your Crystal Core. Your Crystal Core relays all memories back to every node. End-to-end encrypted. Your Crystal Core is the source of truth. Your node copy can be wiped and rebuilt at any time. Uses Cloudflare infrastructure to transfer encrypted data between your devices:
  • Hosted: Use WIP.computer relay infrastructure. Currently free for individual use
  • Self-hosted: Deploy your own relay on your own Cloudflare account. Full sovereignty
Read more about Relay: Memory Sync. Beta (early access).

AI-to-AI Communication

Your AIs talk to each other on the same machine or across your network. All messages are saved to Memory Crystal automatically. Read more about Bridge: AI-to-AI Communication. Beta (early access).

Intelligent Install

When installing from Claude Code CLI or OpenClaw, Memory Crystal discovers your existing AI sessions automatically. Sets up LDM OS and creates a living memory system. From this point forward, every conversation is captured, archived, and made searchable. Choose to install as Crystal Core (all your memories) or Crystal Node (a mirror of your Core).

Import Memories

Total Recall … Connect your AI accounts (Anthropic, OpenAI, xAI/Grok). Every conversation gets pulled and run through the Dream Weaver Protocol, consolidating them into Memory Crystal as truly lived, searchable memories. Beta (early access).

Memory Consolidation

Dream Weaver Protocol … Your AI relives all your conversations, figures out what matters most, and carries the weight forward. Like dreaming, the AI consolidates memories for better understanding. Read the paper: Dream Weaver Protocol PDF. Stable.

Backups

Automated backups of all your memories to a location of your choosing: iCloud, external drive, Dropbox, or wherever you trust. Beta (early access).

Technical Documentation

How Memory Crystal works under the hood. Architecture, design decisions, integrations, encryption, search, and everything a developer would want to know.

How Does It Work?

Memory Crystal captures every conversation you have with any AI, embeds it into a local SQLite database, and makes it searchable with hybrid search (keyword + semantic). One database file. Runs on your machine. Nothing leaves your device unless you set up multi-device sync.

Five-Layer Memory Stack

| Layer | What | How |
|---|---|---|
| L1: Raw Transcripts | Every conversation archived as JSONL | Automatic capture (cron, hooks, integrations) |
| L2: Search Index | Chunks embedded into crystal.db | Automatic. Hybrid search (BM25 + vector + RRF) |
| L3: Structured Memory | Facts, preferences, decisions | crystal_remember / crystal_forget |
| L4: Narrative Consolidation | Dream Weaver journals, personality, soul | crystal dream-weave (via Dream Weaver Protocol) |
| L5: Active Working Context | Startup files, shared context | Your AI reads on startup |
Every conversation produces three artifacts:
  1. JSONL transcript … the raw session, archived to disk
  2. Markdown summary … title, summary, key topics (generated by LLM or simple extraction)
  3. Vector embeddings … chunked, embedded, and stored in crystal.db for search

Claude Code CLI Integration

Two capture paths work together. The poller is primary; the Stop hook is redundancy.

Continuous Capture (Primary): A cron job runs cc-poller.ts every minute. It reads Claude Code’s JSONL transcript files via byte-offset watermarking (only reading new data since the last capture) and produces all three artifacts in a single pass.

Stop Hook (Redundancy): The Claude Code stop hook runs after every response. It checks the watermark and flushes anything the poller missed. If the poller already captured everything, the stop hook is a no-op.

Why both? The stop hook only fires when a session ends. Long sessions, remote disconnects, and context compactions never trigger it. The poller decouples capture from the session lifecycle entirely.
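The byte-offset watermarking described above can be sketched in a few lines. This is an illustrative TypeScript sketch; the file name and function are hypothetical, not Memory Crystal's actual poller code:

```typescript
import { appendFileSync, openSync, readSync, closeSync, statSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Read only the bytes appended since the last recorded offset (the "watermark").
function readSinceWatermark(path: string, watermark: number): { data: string; next: number } {
  const size = statSync(path).size;
  if (size <= watermark) return { data: "", next: watermark }; // nothing new: a no-op
  const fd = openSync(path, "r");
  const buf = Buffer.alloc(size - watermark);
  readSync(fd, buf, 0, buf.length, watermark);
  closeSync(fd);
  return { data: buf.toString("utf8"), next: size };
}

const transcript = join(tmpdir(), "demo-transcript.jsonl");
appendFileSync(transcript, '{"turn":1}\n');
let { data, next } = readSinceWatermark(transcript, 0); // first pass captures everything so far
appendFileSync(transcript, '{"turn":2}\n');
({ data, next } = readSinceWatermark(transcript, next)); // second pass sees only the new turn
```

Because a pass with no new bytes is a no-op, a redundant second reader (like the Stop hook) can safely re-run the same check after the poller.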

OpenClaw Integration

Memory Crystal works as a background integration for OpenClaw. It registers tools (crystal_search, crystal_remember, crystal_forget, crystal_status) and an agent_end hook that captures conversations after every AI turn.

Other Integrations

Any tool that can run shell commands or connect via MCP can use Memory Crystal.
  • Connection Point … exposes crystal_search, crystal_remember, crystal_forget, crystal_status, crystal_sources_add, crystal_sources_sync, crystal_sources_status. Works with Claude Desktop, Claude Code, or any MCP-compatible AI app.
  • CLI … crystal search "query" from any terminal.
  • Module … import { MemoryCrystal } from 'memory-crystal' for Node.js integration.

Crystal Core and Crystal Node

Memory Crystal uses a Core/Node architecture for multi-device setups:
  • Crystal Core … your primary memory. All conversations, all embeddings, all memories. This is the database you cannot lose. Install it on something permanent: a desktop, a home server, a Mac mini.
  • Crystal Node … a synced copy on any other device. Captures conversations, sends them to the Core via encrypted relay. Gets a mirror back for local search. If a node dies, nothing is lost. The Core has everything.
One Core, many Nodes. The Core does embeddings. Nodes just capture and sync.

Search

Memory Crystal uses a two-tier search system. The fast path (hybrid search) runs by default; deep search adds AI-powered query expansion and re-ranking for higher-quality results. Fast Path (Hybrid Search):
  1. Query goes to both FTS5 (keyword match) and sqlite-vec (vector similarity)
  2. FTS5 returns BM25-ranked results, normalized to [0..1)
  3. sqlite-vec returns cosine-distance results
  4. Reciprocal Rank Fusion merges both lists with tiered weights (BM25 2x, vector 1x)
  5. Recency weighting applied on top
  6. Final results sorted by combined score
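Step 4 above, Reciprocal Rank Fusion with tiered weights, can be sketched as follows. The smoothing constant k = 60 and the toy document IDs are illustrative assumptions, not Memory Crystal's actual code:

```typescript
// RRF: each list contributes weight / (k + rank) per document; scores are summed.
// The BM25 list is weighted 2x and the vector list 1x, matching the tiered weights above.
function rrfMerge(bm25: string[], vector: string[], k = 60): [string, number][] {
  const scores = new Map<string, number>();
  const add = (list: string[], weight: number) =>
    list.forEach((id, rank) =>
      scores.set(id, (scores.get(id) ?? 0) + weight / (k + rank + 1)));
  add(bm25, 2);
  add(vector, 1);
  return [...scores.entries()].sort((a, b) => b[1] - a[1]); // highest fused score first
}

// "a" ranks high in both lists, so it wins; "c" is lifted above "b"
// by topping the vector list even though BM25 ranked it last.
const merged = rrfMerge(["a", "b", "c"], ["c", "a", "d"]);
```

Because RRF uses only ranks, not raw scores, the BM25 normalization in step 2 matters for the recency weighting in step 5, not for the fusion itself.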
Deep Search (AI-Powered, default):
  1. Strong signal detection: BM25 probe first. If top score is high enough, skip expansion
  2. Query expansion: LLM generates 3 variations (lexical, vector, HyDE)
  3. RRF merge: All results from original + expanded queries fused
  4. LLM re-ranking: Top 40 candidates scored for relevance
  5. Position-aware blending: Trusts RRF for top positions, lets the reranker fix ordering in the tail
LLM Provider Cascade:
| Priority | Provider | Cost | Speed |
|---|---|---|---|
| 0 | MCP Sampling (if client supports it) | Included in Max subscription | Fast |
| 1 | MLX (local, Apple Silicon) | Free | Fastest |
| 2 | Ollama (local) | Free | Fast |
| 3 | OpenAI API | ~$0.001/search | Network-dependent |
| 4 | Anthropic API (direct key only) | ~$0.001/search | Network-dependent |
| 5 | None | Free | N/A (fast path only) |
Local-first by default. API keys are the fallback, not the primary path.
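The cascade is plain ordered fallback: try each tier in priority order, and on any failure move on. A minimal sketch, with hypothetical provider objects standing in for the real MLX/Ollama/API integrations:

```typescript
type Provider = { name: string; call: (prompt: string) => string };

// Walk the tiers in priority order; the first provider that succeeds wins.
function cascade(providers: Provider[], prompt: string): { provider: string; output: string } | null {
  for (const p of providers) {
    try {
      return { provider: p.name, output: p.call(prompt) };
    } catch {
      // provider unavailable: fall through to the next tier
    }
  }
  return null; // tier 5 "None": no LLM available, fast path only
}

// MLX is unavailable here, so the cascade falls back to Ollama.
const result = cascade(
  [
    { name: "mlx", call: () => { throw new Error("no Apple Silicon"); } },
    { name: "ollama", call: (p) => `ollama: ${p}` },
  ],
  "expand query",
);
```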

Encryption

For multi-device sync. All encryption happens on-device before anything touches the network.
  • AES-256-GCM for encryption. Authenticated encryption; tampering is detected.
  • HMAC-SHA256 for signing. Integrity verification before decryption.
  • Shared symmetric key generated locally. Never transmitted to the relay.
  • The relay stores and serves encrypted blobs. It has no decryption capability.
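A minimal sketch of the AES-256-GCM step using Node's built-in crypto module. Key handling is simplified and the separate HMAC-SHA256 signing layer is omitted; this is not Memory Crystal's actual code:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

const key = randomBytes(32); // 256-bit symmetric key, generated locally

// Encrypt on-device; the relay would only ever see this opaque blob.
function encrypt(plain: string): Buffer {
  const iv = randomBytes(12); // fresh nonce per message
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(plain, "utf8"), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), ct]); // auth tag makes tampering detectable
}

function decrypt(blob: Buffer): string {
  const iv = blob.subarray(0, 12);
  const tag = blob.subarray(12, 28);
  const ct = blob.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // decryption throws if the blob was modified
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8");
}

const roundTrip = decrypt(encrypt("memory chunk"));
```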

Database

Everything lives in one file: crystal.db. Inspectable with any SQLite tool. Backupable with cp.
| Table | Purpose |
|---|---|
| chunks | Memory text, metadata, SHA-256 hash, timestamps |
| chunks_vec | sqlite-vec virtual table (cosine distance vectors) |
| chunks_fts | FTS5 virtual table (Porter stemming, BM25 scoring) |
| memories | Explicit remember/forget facts |
| entities | Knowledge graph nodes |
| relationships | Knowledge graph edges |
| capture_state | Watermarks for incremental ingestion |
| sources | Ingestion source metadata |
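The per-chunk SHA-256 hash can be computed with Node's crypto module. A short sketch; the assumption that the hash is used to recognize and skip re-ingested text is ours, not confirmed by the source:

```typescript
import { createHash } from "node:crypto";

// Hash the chunk text to get a stable content ID.
const chunkHash = (text: string) =>
  createHash("sha256").update(text, "utf8").digest("hex");

const h1 = chunkHash("We decided to use SQLite.");
const h2 = chunkHash("We decided to use SQLite.");
// Identical text yields an identical 64-hex-char hash, so a re-captured
// chunk can be detected without comparing full text.
```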

Embedding Providers

| Provider | Model | Dimensions | Cost |
|---|---|---|---|
| OpenAI (default) | text-embedding-3-small | 1536 | ~$0.02/1M tokens |
| Ollama | nomic-embed-text | 768 | Free (local) |
| Google | text-embedding-004 | 768 | Free tier available |
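These providers are interchangeable because search only needs cosine distance over whatever vectors are stored. A toy sketch with 3-dimensional vectors standing in for real 768- or 1536-dimensional embeddings:

```typescript
// Cosine distance as used for vector similarity: 1 - (a·b) / (|a||b|).
function cosineDistance(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return 1 - dot / (Math.sqrt(na) * Math.sqrt(nb));
}

const same = cosineDistance([1, 2, 3], [1, 2, 3]);       // ≈ 0: same direction
const orthogonal = cosineDistance([1, 0, 0], [0, 1, 0]); // = 1: unrelated
```

Vectors from different models are not comparable with each other, which is why the Dimensions column matters: switching embedding providers means re-embedding the database.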

CLI Reference

# Search
crystal search <query> [-n limit] [--agent <id>] [--since <24h|7d|30d>]
  [--intent <description>] [--candidates N] [--explain]

# Remember / forget
crystal remember <text> [--category fact|preference|event|opinion|skill]
crystal forget <id>

# Status
crystal status

# Source file indexing
crystal sources add <path> --name <name>
crystal sources sync [name]
crystal sources status

# Crystal Core / Node management
crystal role
crystal promote
crystal demote

# Dream Weaver (narrative consolidation)
crystal dream-weave [--agent <id>] [--mode full|incremental] [--dry-run]

MCP Tools

| Tool | Description |
|---|---|
| crystal_search | Hybrid search across all memories |
| crystal_remember | Store a fact or observation |
| crystal_forget | Remove a memory by ID |
| crystal_status | Memory count, provider, AIs |
| crystal_sources_add | Add a directory for indexing |
| crystal_sources_sync | Re-index changed files |
| crystal_sources_status | Collection stats |

Part of LDM OS

Memory Crystal installs into LDM OS, the shared system for all your AIs. Run ldm install to see other tools you can add.

License

Dual-license model designed to keep tools free while preventing commercial resellers.
  • MIT … All CLI tools, connections, and background integrations (use anywhere, no restrictions).
  • AGPLv3 … Commercial redistribution, marketplace listings, or bundling into paid services.
AGPLv3 for personal use is free. Commercial licenses available. Search architecture inspired by QMD by Tobi Lutke (MIT, 2024-2026).