DAG-based summarization that never forgets — from the lossless-claw plugin
Every LLM has a context window limit. When a conversation exceeds it, something has to give: most systems either truncate old messages outright or replace them with lossy summaries.
LCM builds a Directed Acyclic Graph (DAG) in which summaries point back to their source material:
```mermaid
flowchart TB
    subgraph RAW["Raw Messages (Layer 0)"]
        M1[msg 1]
        M2[msg 2]
        M3[msg 3]
        M4[msg 4]
        M5[msg 5]
        M6[msg 6]
        M7[msg 7]
        M8[msg 8]
        M9["msgs 9-16"]
        M10["msgs 17-24"]
        M11["msgs 25-32"]
    end
    subgraph LEAF["Leaf Summaries (Layer 1)"]
        L1["Summary A<br/>msgs 1-8"]
        L2["Summary B<br/>msgs 9-16"]
        L3["Summary C<br/>msgs 17-24"]
    end
    subgraph CONDENSED["Condensed Summaries (Layer 2)"]
        C1["Summary X<br/>A + B + C"]
    end
    subgraph ACTIVE["Active Context"]
        RECENT["Recent msgs<br/>25-32"]
        SUM[Summaries]
    end
    M1 & M2 & M3 & M4 & M5 & M6 & M7 & M8 --> L1
    M9 --> L2
    M10 --> L3
    L1 & L2 & L3 --> C1
    C1 --> SUM
    M11 --> RECENT
    style RAW fill:#1f2d1f,stroke:#3fb950
    style LEAF fill:#2d2d1f,stroke:#d29922
    style CONDENSED fill:#2d1f2d,stroke:#a371f7
    style ACTIVE fill:#1f2d3d,stroke:#58a6ff
```
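The structure above can be modeled with a simple node type, where each summary keeps pointers to the nodes it summarizes. This is an illustrative sketch, not the plugin's actual schema; all names here are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One node in the summarization DAG (names are illustrative)."""
    id: str
    layer: int    # 0 = raw message, 1 = leaf summary, 2+ = condensed
    text: str     # message body or summary text
    children: list = field(default_factory=list)  # ids of source nodes

# Layer 0 raw messages feed a Layer 1 leaf summary, which feeds Layer 2.
msgs = [Node(f"m{i}", 0, f"message {i}") for i in range(1, 9)]
leaf = Node("A", 1, "Summary A: msgs 1-8", children=[m.id for m in msgs])
top = Node("X", 2, "Summary X: A + B + C", children=["A"])

index = {n.id: n for n in msgs + [leaf, top]}

def expand(node: Node) -> list:
    """Drill down one level: return the source nodes behind a summary."""
    return [index[c] for c in node.children]
```

Because every summary records its children, any level of the DAG can be walked back down to the raw messages.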
When context usage hits roughly 75% of the model's window, compaction triggers.
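The trigger condition is a simple threshold check. A minimal sketch, assuming a token-count interface (function and parameter names are hypothetical):

```python
def should_compact(used_tokens: int, window_tokens: int,
                   threshold: float = 0.75) -> bool:
    """Fire compaction once context usage crosses the threshold."""
    return used_tokens >= window_tokens * threshold
```

For example, a 128k-token window would compact at 96k tokens of usage.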
The oldest unprotected messages are chunked and summarized into leaf nodes.
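A sketch of that step, assuming a protected tail of 32 recent messages and 8-message chunks (both numbers mirror the examples in this post; the `summarize` callable stands in for an LLM call, and all names are hypothetical):

```python
def compact_oldest(messages, summarize, protected_tail=32, chunk_size=8):
    """Chunk the oldest unprotected messages and summarize each chunk;
    the protected recent tail stays raw."""
    if len(messages) <= protected_tail:
        return [], messages  # nothing old enough to compact yet
    old, recent = messages[:-protected_tail], messages[-protected_tail:]
    summaries = [summarize(old[i:i + chunk_size])
                 for i in range(0, len(old), chunk_size)]
    return summaries, recent
```

With 48 messages, the 16 oldest collapse into two leaf summaries while the last 32 remain untouched.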
When enough leaf summaries accumulate, they are themselves summarized into higher-level condensed nodes.
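The condensing step is the same operation one layer up. A hypothetical sketch, assuming a fan-in of three leaves per condensed node (matching the A + B + C example in the diagram):

```python
def condense(leaves, summarize, fan_in=3):
    """Roll groups of `fan_in` leaf summaries into higher-level nodes;
    below the threshold, leave them untouched."""
    if len(leaves) < fan_in:
        return leaves
    return [summarize(leaves[i:i + fan_in])
            for i in range(0, len(leaves), fan_in)]
```

Applying this repeatedly is what produces the multi-layer DAG: leaves condense into Layer 2 nodes, which can later condense again.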
Each turn, LCM assembles the active context from summaries plus the most recent raw messages.
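Assembly is an ordering concern: highest-level summaries first, then any leaf summaries not yet condensed, then the raw recent tail. A minimal sketch with hypothetical names:

```python
def assemble_context(condensed, leaves, recent, keep_leaves=2):
    """Order: condensed summaries, then the newest uncondensed
    leaf summaries, then raw recent messages."""
    return list(condensed) + list(leaves[-keep_leaves:]) + list(recent)
```

So a model might see one month-level summary, two recent leaf summaries, and the last 32 raw messages, regardless of how long the conversation has run.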
LCM gives agents tools to search and expand compacted history:
- **Search** — search through all messages (raw + summarized) for keywords or patterns; returns matching snippets with context.
- **Describe** — get a description of what's in the compacted history: "What topics have we covered?" without retrieving everything.
- **Expand** — drill into a summary to retrieve its source material: the agent sees a summary, wants details, and expands it to get the original messages.
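The three tools reduce to simple queries over the node store. This sketch uses a plain dict; the actual tool names and store layout in lossless-claw may differ:

```python
# store: id -> {"layer": int, "text": str, "children": [ids]}
store = {
    "m1": {"layer": 0, "text": "we chose SQLite for storage", "children": []},
    "m2": {"layer": 0, "text": "agreed on a DAG design", "children": []},
    "A":  {"layer": 1, "text": "Summary: storage + DAG design decisions",
           "children": ["m1", "m2"]},
}

def search_history(query):
    """Keyword search over raw and summarized messages."""
    return [i for i, n in store.items() if query.lower() in n["text"].lower()]

def describe_history():
    """High-level view: summary texts only, no full retrieval."""
    return [n["text"] for n in store.values() if n["layer"] > 0]

def expand_summary(summary_id):
    """Retrieve the original messages behind a summary."""
    return [store[c]["text"] for c in store[summary_id]["children"]]
```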
| Feature | Truncation | Mem0 (Graph) | Letta (Archival) | LCM (DAG) |
|---|---|---|---|---|
| Raw messages preserved | No | No | Yes | Yes |
| Structured extraction | No | Entities | Summaries | Summaries |
| Can drill down to original | No | No | Yes | Yes |
| Hierarchical compression | No | No | No | Yes (DAG) |
| Relationship tracking | No | Yes | No | No |
| Cost per message | Zero | High | Medium | Medium* |
*LCM cost is amortized — summarization only happens when compaction triggers, not every message.
```mermaid
flowchart LR
    subgraph WEEK1["Week 1"]
        W1[120 msgs] --> S1[3 summaries]
    end
    subgraph WEEK2["Week 2"]
        W2[95 msgs] --> S2[3 summaries]
    end
    subgraph WEEK3["Week 3"]
        W3[140 msgs] --> S3[4 summaries]
    end
    subgraph WEEK4["Week 4"]
        W4[80 msgs] --> S4[2 summaries]
    end
    S1 & S2 --> C1["Condensed<br/>Weeks 1-2"]
    S3 & S4 --> C2["Condensed<br/>Weeks 3-4"]
    C1 & C2 --> TOP["Month<br/>Summary"]
    subgraph CONTEXT["What model sees"]
        TOP
        C2
        S4
        RECENT[Last 32 msgs]
    end
    style WEEK1 fill:#1f2d1f,stroke:#3fb950
    style WEEK2 fill:#1f2d1f,stroke:#3fb950
    style WEEK3 fill:#1f2d1f,stroke:#3fb950
    style WEEK4 fill:#1f2d1f,stroke:#3fb950
    style CONTEXT fill:#1f2d3d,stroke:#58a6ff
```
- 435 messages compressed into a handful of summaries plus the 32 most recent messages
- All 435 originals still in SQLite, expandable on demand
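The "expandable on demand" property comes from persisting both nodes and DAG edges. A hypothetical sketch of what such a SQLite layout could look like (the plugin's real schema may differ):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE nodes (
    id    TEXT PRIMARY KEY,
    layer INTEGER NOT NULL,  -- 0 = raw, 1 = leaf summary, 2+ = condensed
    text  TEXT NOT NULL
);
CREATE TABLE edges (         -- parent summary -> child source (the DAG)
    parent TEXT NOT NULL REFERENCES nodes(id),
    child  TEXT NOT NULL REFERENCES nodes(id)
);
""")
conn.executemany("INSERT INTO nodes VALUES (?, ?, ?)",
                 [("m1", 0, "raw message"), ("A", 1, "leaf summary")])
conn.execute("INSERT INTO edges VALUES ('A', 'm1')")

# Expanding a summary on demand is a join from parent to its sources.
rows = conn.execute("""
    SELECT n.text FROM edges e JOIN nodes n ON n.id = e.child
    WHERE e.parent = 'A'
""").fetchall()
```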
Then enable the plugin in your OpenClaw config.
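The exact configuration keys aren't reproduced here, so the following is only a hypothetical sketch; every key name is an assumption, with the 0.75 threshold and 32-message tail mirroring the defaults described above:

```json
{
  "plugins": {
    "lossless-claw": {
      "enabled": true,
      "compactionThreshold": 0.75,
      "protectedRecentMessages": 32
    }
  }
}
```

See the repository linked below for the actual options.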
More info: github.com/Martian-Engineering/lossless-claw
LCM makes effectively unbounded conversations possible. Instead of truncating history, it builds a navigable DAG of summaries: the agent always has high-level context and can drill down to any level of detail on demand. Nothing is ever lost.
It's like having a perfect memory with adjustable zoom — see the forest or any individual tree.
Built for the Ori group chat — March 2026