I have a proposal that cheaply addresses long-term memory for LLMs when new data arrives continuously. The system involves no code, just two Markdown files.
For retrieval, there is a semantic filesystem that makes it easy for LLMs to search using shell commands.
It is currently a scrappy v1, but it works better than anything else I have tried.
Curious for any feedback!
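To give a rough idea of the retrieval side, here's a minimal sketch (the file names, layout, and grep flags are illustrative, not the exact implementation):

```python
import subprocess

# Illustrative layout: one file of durable facts, one running log.
MEMORY_FILES = ["MEMORY.md", "LOG.md"]

def recall(query: str) -> str:
    """Shell-based retrieval: the LLM just greps its own memory files.

    In practice the model issues the command itself; this only shows the
    shape of the call. -i = case-insensitive, -n = line numbers, so the
    model can quote or edit the exact line it found.
    """
    result = subprocess.run(
        ["grep", "-in", query, *MEMORY_FILES],
        capture_output=True, text=True,
    )
    return result.stdout  # matching lines, prefixed with file:line
```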
alexbike · 4 hours ago
The markdown approach has a real advantage people underestimate: you can read and edit the memory yourself. With vector DBs and embeddings the memory becomes opaque — you can't inspect or correct what the model "knows". Plain files keep the human in the loop.
The hard part is usually knowing what *not* to write down. Every system I've seen eventually drowns in low-signal entries.
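A naive write gate is easy to sketch, though the threshold and the dedup heuristic below are made up; in practice you'd score salience with a cheap LLM call:

```python
def should_persist(entry: str, existing: list[str], salience: float) -> bool:
    """Naive write gate. `salience` would come from a cheap LLM call like
    "rate 0-1 how likely this is to matter in a future session"; the 0.7
    threshold and the substring dedup are both made-up heuristics.
    """
    if salience < 0.7:                           # drop low-signal entries
        return False
    e = entry.lower()
    if any(e in x.lower() or x.lower() in e      # crude near-duplicate check
           for x in existing):
        return False
    return True
```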
in-silico · 4 hours ago
This assumes that the model's behavior and memories are faithful to their English / human-language representation, and don't stray into (even subtle) "neuralese".
verdverm · 3 hours ago
Is there anything (besides plumbing) that prevents both? i.e. when the file is edited, all the representations are updated
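E.g. the plumbing could be as simple as re-embedding changed chunks on every edit. A sketch, where `embed` stands in for whatever embedding function you already have:

```python
import hashlib

_index: dict[str, tuple[str, list[float]]] = {}  # chunk hash -> (text, vector)

def sync(path: str, embed) -> None:
    """Re-derive representations whenever the Markdown file changes.

    Chunks by blank-line-separated paragraph and keys each chunk on a
    content hash, so an edit only re-embeds the paragraphs that actually
    changed.
    """
    with open(path) as f:
        chunks = [c for c in f.read().split("\n\n") if c.strip()]
    seen = set()
    for chunk in chunks:
        cid = hashlib.sha256(chunk.encode()).hexdigest()
        seen.add(cid)
        if cid not in _index:            # new or edited paragraph
            _index[cid] = (chunk, embed(chunk))
    for cid in list(_index):             # edited-away or deleted paragraphs
        if cid not in seen:
            del _index[cid]
```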
dhruv3006 · an hour ago
I love how you approached this with markdown!
I guess the markdown approach really has an advantage over others.
PS: Something I built on markdown: https://voiden.md/
namanyayg · 5 hours ago
I've seen a lot of such systems come and go. One of my friends is working on probably the best (VC-funded) memory system right now.
The problem is always that when there are too many memories, the context gets overloaded and the AI starts ignoring the system prompt.
Definitely not a solved problem, and there need to be benchmarks to evaluate these solutions, though benchmarks themselves can be easily gamed and aren't universally applicable.
natpalmer1776 · 2 hours ago
The armchair ML engineer in me says our current context management approach is the issue. With a proper memory management system wired up to its own LLM-driven orchestrator, memories should be pulled in and pushed out between prompts, and ideally even in the middle of a “thinking” cycle. You can make this performant with vector databases and such, but the core principle remains the same, and it's oft repeated by parents across the world: “Clean up your toys before you pull a new one out!”
Also, since I thought for another 30 seconds: the “too many memories!” problem imo is the same problem as context management and compaction, and requires the same approach: more AI telling AI what AI should be thinking about. De-rank “memories” in the context manager as irrelevant and don't pass them to the outer context. If a memory is de-ranked often and not used enough, it gets purged.
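A toy version of the de-rank/purge loop (relevance is stubbed out with keyword overlap, and the decay/purge numbers are pulled out of thin air; a real orchestrator would be LLM-driven):

```python
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    rank: float = 1.0   # decays each time the orchestrator passes it over
    used: int = 0       # bumped when it actually makes it into the context

class MemoryManager:
    """Toy de-rank-and-purge loop. Relevance here is a keyword stub."""

    def __init__(self, purge_below: float = 0.2):
        self.memories: list[Memory] = []
        self.purge_below = purge_below

    def add(self, text: str) -> None:
        self.memories.append(Memory(text))

    def select(self, prompt: str, k: int = 5) -> list[str]:
        words = set(prompt.lower().split())
        picked, rest = [], []
        for m in self.memories:
            (picked if words & set(m.text.lower().split()) else rest).append(m)
        for m in picked:
            m.used += 1                  # it made it into the outer context
        for m in rest:
            m.rank *= 0.9                # de-rank what we didn't pass along
        # purge: de-ranked often and never actually used
        self.memories = [m for m in self.memories
                         if m.rank >= self.purge_below or m.used > 0]
        picked.sort(key=lambda m: m.rank, reverse=True)
        return [m.text for m in picked[:k]]
```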
dummydummy1234 · an hour ago
Mid-thinking-cycle swaps seem dangerous, as mutating the context will probably kill prompt caching.
natpalmer1776 · an hour ago
Mid-thinking-cycle memory swaps would require a significant architecture change to the current state of the art, and imo that's a key blocker to AGI.
xwowsersx · 2 hours ago
What is the memory system you are referring to? I've been trying Memori with OpenClaw. Haven't had a ton of time to really kick the tires on it, so the jury's still out.
sudb · 5 hours ago
I really like the simplicity of this! What's retrieval performance and speed like?