Configures OpenClaw memory search to use Ollama as the embeddings server via its OpenAI-compatible `/v1/embeddings` endpoint, replacing the built-in node-llama-cpp local GGUF loading. Includes interactive model selection and an optional import of an existing local embedding GGUF into Ollama.
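As a rough sketch of what the resulting configuration might look like, the block below points the embeddings provider at a local Ollama instance. The key names (`memorySearch`, `embeddings`, `provider`, etc.) are illustrative only and are not guaranteed to match OpenClaw's actual schema; the Ollama defaults (port `11434`, the `/v1` OpenAI-compatibility prefix, and the `nomic-embed-text` embedding model) are real, and the API key can be any placeholder since Ollama does not validate it.

```json
{
  "memorySearch": {
    "embeddings": {
      "provider": "openai-compatible",
      "baseUrl": "http://localhost:11434/v1",
      "model": "nomic-embed-text",
      "apiKey": "ollama"
    }
  }
}
```

The model named here must already be pulled into Ollama (e.g. `ollama pull nomic-embed-text`), which is where the optional GGUF import step comes in for models you already have locally.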
No changes detected in this version (1.0.4):

- No file changes between the previous and latest version.
- No updates to features, documentation, or behavior.