Dream CE explores whether an AI memory system should do more than store, retrieve, and summarize past interactions.
A key question is: does a memory system need a “dreaming” process at all?
By “dreaming”, we do not mean mystical generation or uncontrolled speculation. We mean an offline memory-consolidation process that may detect unresolved patterns, conflicts, and repeated user needs, and derive higher-level insights from past memories.
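To make this concrete, here is a minimal sketch of one consolidation pass. It assumes each past session has already been reduced to a list of tagged needs; the tag names and the `repeated_needs` helper are illustrative, not part of any actual Dream CE API.

```python
from collections import Counter

# Hypothetical per-session tags produced by an earlier summarization step.
sessions = [
    ["wants_code_examples", "prefers_python"],
    ["wants_code_examples"],
    ["asks_about_testing", "wants_code_examples"],
]

def repeated_needs(sessions, min_sessions=2):
    """Flag needs that recur across sessions - a cross-session pattern
    that per-session summarization alone would not surface."""
    counts = Counter(tag for session in sessions for tag in set(session))
    return [tag for tag, n in counts.items() if n >= min_sessions]

print(repeated_needs(sessions))  # ['wants_code_examples']
```

Even this trivial frequency count illustrates the boundary question below: counting recurrences is safe, but the step from “this tag recurs” to “the user wants X” is already an inference.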
However, this introduces an important boundary problem:
- If Dream only summarizes memories, it may not add much beyond existing summarization.
- If Dream reasons too aggressively, it may hallucinate, over-interpret user intent, or create false memories.
- If Dream never reasons, it cannot discover cross-session patterns or produce useful long-term insights.
We would like to discuss where this boundary should be.
Questions
- Should an AI memory system have a Dream-like consolidation layer?
- What kinds of reasoning should be allowed over memory?
- What should Dream never infer?
- How do we prevent over-interpretation or false memory creation?
- What is the difference between:
  - summarization,
  - reflection,
  - abstraction,
  - inference,
  - hallucination?
- Should Dream output only “safe” factual memory, or can it output hypotheses with confidence and rationale?
- How much reasoning would you want your own memory system or agent to perform?
Current direction
The current Dream CE design tries to stay between two extremes:
- more than batch summarization,
- less than unrestricted autonomous reasoning.
Dream actions are expected to include rationale and confidence, and persistence should only happen when the output can justify how it helps future answers.
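One way to sketch that persistence rule: every Dream output carries its claim, rationale, and confidence, and a gate decides whether it is persisted. The field names and threshold below are assumptions for illustration, not the actual Dream CE design.

```python
from dataclasses import dataclass

@dataclass
class DreamInsight:
    claim: str        # the proposed memory, e.g. "user prefers concise answers"
    rationale: str    # which past memories support this claim
    confidence: float # 0.0-1.0, estimated by the consolidation pass

def should_persist(insight: DreamInsight, threshold: float = 0.8) -> bool:
    """Persist only insights that carry a rationale and enough
    confidence to plausibly improve future answers; everything
    else stays out of long-term memory."""
    return bool(insight.rationale) and insight.confidence >= threshold

insight = DreamInsight(
    claim="User repeatedly asks for Python examples",
    rationale="Seen in 5 of the last 6 sessions",
    confidence=0.9,
)
print(should_persist(insight))  # True
```

The key design choice is that the rationale is mandatory: an insight with high confidence but no supporting memories is rejected, which is one possible guard against false-memory creation.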
Desired outcome
We hope to collect principles, concerns, and use cases from the community. The outcome may become design guidelines for Dream CE, especially around:
- memory safety,
- confidence and rationale,
- what can be persisted,
- what should remain only as a diary or trace,
- how much “reasoning over memory” is acceptable.