Session pruning trims **old tool results** from the in-memory context right before each LLM call. It does **not** rewrite the on-disk session history (`*.jsonl`).
- If estimated context ratio ≥ `softTrimRatio`: soft-trim oversized tool results.
- If still ≥ `hardClearRatio` **and** prunable tool text ≥ `minPrunableToolChars`: hard-clear the oldest eligible tool results.
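The threshold checks above can be sketched roughly as follows. This is an illustrative sketch, not the actual implementation: the function and config names (`planPruning`, `PruneConfig`) are hypothetical, and in the real flow the context ratio would be re-estimated after soft-trimming, which this sketch skips.

```typescript
// Hypothetical shape of the pruning config; field names mirror the
// documented options, defaults here are made up for illustration.
interface PruneConfig {
  softTrimRatio: number;        // e.g. 0.7 of the context window
  hardClearRatio: number;       // e.g. 0.85 of the context window
  minPrunableToolChars: number; // skip hard-clear below this much tool text
}

// Decide which pruning passes would run for a given context estimate.
function planPruning(
  estimatedRatio: number,
  prunableToolChars: number,
  cfg: PruneConfig,
): Array<"soft-trim" | "hard-clear"> {
  const steps: Array<"soft-trim" | "hard-clear"> = [];
  if (estimatedRatio >= cfg.softTrimRatio) {
    steps.push("soft-trim"); // trim oversized tool results first
  }
  // Hard-clear only fires above the hard threshold AND when there is
  // enough prunable tool text to make clearing worthwhile.
  if (
    estimatedRatio >= cfg.hardClearRatio &&
    prunableToolChars >= cfg.minPrunableToolChars
  ) {
    steps.push("hard-clear");
  }
  return steps;
}
```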
### aggressive
- Always hard-clears eligible tool results before the cutoff.
- Ignores `hardClear.enabled` (always clears when eligible).
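As a minimal sketch of the override, assuming a hypothetical `shouldHardClear` helper (the non-aggressive mode name is not specified here, so any other mode string stands in for it):

```typescript
// Illustrative only: in "aggressive" mode, hardClear.enabled is ignored
// and eligible tool results are always cleared.
function shouldHardClear(
  mode: string,
  hardClearEnabled: boolean,
  eligible: boolean,
): boolean {
  if (!eligible) return false;          // never clear ineligible results
  if (mode === "aggressive") return true; // ignores hardClear.enabled
  return hardClearEnabled;              // other modes respect the flag
}
```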
## Soft vs hard pruning
- **Soft-trim**: only for oversized tool results.
- Keeps head + tail, inserts `...`, and appends a note with the original size.
- Skips results with image blocks.
- **Hard-clear**: replaces the entire tool result with `hardClear.placeholder`.
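A soft-trim along these lines keeps the head and tail of an oversized result and records the original size. This is a hedged sketch: the function name, the kept-chunk size, and the exact note wording are assumptions, not the real implementation.

```typescript
// Illustrative soft-trim: keep head + tail, insert "...", and append a
// note with the original size. `keep` is a hypothetical chunk size.
function softTrim(text: string, maxChars: number, keep = 200): string {
  if (text.length <= maxChars) return text; // not oversized: untouched
  const head = text.slice(0, keep);
  const tail = text.slice(-keep);
  return `${head}\n...\n${tail}\n[soft-trimmed: original was ${text.length} chars]`;
}
```

A hard-clear, by contrast, would discard the text entirely and substitute the configured `hardClear.placeholder` string.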
## Tool selection
- `tools.allow` / `tools.deny` support `*` wildcards.
- Deny wins.
- Empty allow list => all tools allowed.
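Those three rules can be sketched as below. The helper names (`wildcardToRegExp`, `isToolPrunable`) are hypothetical; only the semantics (deny wins, empty allow list admits everything, `*` matches any run of characters) come from the rules above.

```typescript
// Compile a "*" wildcard pattern into an anchored RegExp,
// escaping every other regex metacharacter.
function wildcardToRegExp(pattern: string): RegExp {
  const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, "\\$&");
  return new RegExp(`^${escaped.replace(/\*/g, ".*")}$`);
}

function isToolPrunable(tool: string, allow: string[], deny: string[]): boolean {
  // Deny wins over any allow match.
  if (deny.some((p) => wildcardToRegExp(p).test(tool))) return false;
  // Empty allow list => all tools allowed.
  if (allow.length === 0) return true;
  return allow.some((p) => wildcardToRegExp(p).test(tool));
}
```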
## Interaction with other limits
- Built-in tools already truncate their own output; session pruning is an extra layer that prevents long-running chats from accumulating too much tool output in the model context.