---
summary: "How OpenClaw builds prompt context and reports token usage + costs"
read_when:
- Explaining token usage, costs, or context windows
- Debugging context growth or compaction behavior
title: "Token Use and Costs"
---
# Token use & costs
OpenClaw tracks **tokens**, not characters. Tokens are model-specific, but most
OpenAI-style models average ~4 characters per token for English text.
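As a rough illustration of the ~4-characters-per-token heuristic (a ballpark only; real counts come from each model's tokenizer):

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate using the ~4 chars/token heuristic for English text."""
    return round(len(text) / chars_per_token)

# 39 characters → roughly 10 tokens
print(estimate_tokens("OpenClaw tracks tokens, not characters."))
```

Treat this as a sanity-check heuristic only; `/status` and `/usage` report the provider's actual counts.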
## How the system prompt is built
OpenClaw assembles its own system prompt on every run. It includes:
- Tool list + short descriptions
- Skills list (only metadata; instructions are loaded on demand with `read`)
- Self-update instructions
- Workspace + bootstrap files (`AGENTS.md`, `SOUL.md`, `TOOLS.md`, `IDENTITY.md`, `USER.md`, `HEARTBEAT.md`, `BOOTSTRAP.md` when new, plus `MEMORY.md` and/or `memory.md` when present). Large files are truncated by `agents.defaults.bootstrapMaxChars` (default: 20000), and total bootstrap injection is capped by `agents.defaults.bootstrapTotalMaxChars` (default: 150000). `memory/*.md` files are loaded on demand via memory tools and are not auto-injected.
- Time (UTC + user timezone)
- Reply tags + heartbeat behavior
- Runtime metadata (host/OS/model/thinking)
See the full breakdown in [System Prompt](/concepts/system-prompt).
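A config sketch for the bootstrap limits mentioned above (the values shown are the documented defaults):

```yaml
agents:
  defaults:
    bootstrapMaxChars: 20000       # per-file truncation limit (chars)
    bootstrapTotalMaxChars: 150000 # cap on total bootstrap injection (chars)
```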
## What counts in the context window
Everything the model receives counts toward the context limit:
- System prompt (all sections listed above)
- Conversation history (user + assistant messages)
- Tool calls and tool results
- Attachments/transcripts (images, audio, files)
- Compaction summaries and pruning artifacts
- Provider wrappers or safety headers (not visible, but still counted)
For images, OpenClaw downscales transcript/tool image payloads before provider calls.
Use `agents.defaults.imageMaxDimensionPx` (default: `1200` ) to tune this:
- Lower values usually reduce vision-token usage and payload size.
- Higher values preserve more visual detail for OCR/UI-heavy screenshots.
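For example, to favor smaller payloads over detail (`800` is an illustrative value, not a recommendation):

```yaml
agents:
  defaults:
    imageMaxDimensionPx: 800 # default is 1200; lower trades detail for fewer vision tokens
```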
For a practical breakdown (per injected file, tools, skills, and system prompt size), use `/context list` or `/context detail`. See [Context](/concepts/context).
## How to see current token usage
Use these in chat:
- `/status` → **emoji-rich status card** with the session model, context usage,
  last response input/output tokens, and **estimated cost** (API key only).
- `/usage off|tokens|full` → appends a **per-response usage footer** to every reply.
  - Persists per session (stored as `responseUsage`).
  - OAuth auth **hides cost** (tokens only).
- `/usage cost` → shows a local cost summary from OpenClaw session logs.
Other surfaces:
- **TUI/Web TUI:** `/status` + `/usage` are supported.
- **CLI:** `openclaw status --usage` and `openclaw channels list` show
  provider quota windows (not per-response costs).
## Cost estimation (when shown)
Costs are estimated from your model pricing config:
```
models.providers.<provider>.models[].cost
```
These are **USD per 1M tokens** for `input`, `output`, `cacheRead`, and
`cacheWrite`. If pricing is missing, OpenClaw shows tokens only. OAuth tokens
never show dollar cost.
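The arithmetic behind the estimate is straightforward: each token class is billed at its USD-per-1M rate. A minimal sketch (illustrative only, not OpenClaw's internal code; the pricing numbers are hypothetical placeholders, though the field names mirror the config keys above):

```python
def estimate_cost_usd(usage: dict, pricing: dict) -> float:
    """Estimate response cost from token counts and USD-per-1M-token pricing.

    `usage` holds token counts; `pricing` holds USD per 1M tokens.
    Both use the keys: input, output, cacheRead, cacheWrite.
    """
    return sum(
        usage.get(kind, 0) * pricing.get(kind, 0.0) / 1_000_000
        for kind in ("input", "output", "cacheRead", "cacheWrite")
    )

# Hypothetical pricing: $3/M input, $15/M output, $0.30/M cache read, $3.75/M cache write.
pricing = {"input": 3.0, "output": 15.0, "cacheRead": 0.30, "cacheWrite": 3.75}
usage = {"input": 12_000, "output": 1_500, "cacheRead": 90_000, "cacheWrite": 0}
print(f"${estimate_cost_usd(usage, pricing):.4f}")  # → $0.0855
```

Note how cache reads dominate the token count here but contribute little to the cost, which is the point of keeping the cache warm.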
## Cache TTL and pruning impact
Provider prompt caching only applies within the cache TTL window. OpenClaw can
optionally run **cache-ttl pruning**: it prunes the session once the cache TTL
has expired, then resets the cache window so subsequent requests can re-use the
freshly cached context instead of re-caching the full history. This keeps cache
write costs lower when a session goes idle past the TTL.
Configure it in [Gateway configuration](/gateway/configuration) and see the
behavior details in [Session pruning](/concepts/session-pruning).
Heartbeat can keep the cache **warm** across idle gaps. If your model cache TTL
is `1h` , setting the heartbeat interval just under that (e.g., `55m` ) can avoid
re-caching the full prompt, reducing cache write costs.
In multi-agent setups, you can keep one shared model config and tune cache behavior
per agent with `agents.list[].params.cacheRetention` .
For a full knob-by-knob guide, see [Prompt Caching](/reference/prompt-caching).
For Anthropic API pricing, cache reads are significantly cheaper than input
tokens, while cache writes are billed at a higher multiplier. See Anthropic’s
prompt caching pricing for the latest rates and TTL multipliers:
[https://docs.anthropic.com/docs/build-with-claude/prompt-caching](https://docs.anthropic.com/docs/build-with-claude/prompt-caching)
### Example: keep 1h cache warm with heartbeat
```yaml
agents:
  defaults:
    model:
      primary: "anthropic/claude-opus-4-6"
    models:
      "anthropic/claude-opus-4-6":
        params:
          cacheRetention: "long"
    heartbeat:
      every: "55m"
```
### Example: mixed traffic with per-agent cache strategy
```yaml
agents:
  defaults:
    model:
      primary: "anthropic/claude-opus-4-6"
    models:
      "anthropic/claude-opus-4-6":
        params:
          cacheRetention: "long" # default baseline for most agents
  list:
    - id: "research"
      default: true
      heartbeat:
        every: "55m" # keep long cache warm for deep sessions
    - id: "alerts"
      params:
        cacheRetention: "none" # avoid cache writes for bursty notifications
```
`agents.list[].params` merges on top of the selected model's `params`, so you can
override only `cacheRetention` and inherit other model defaults unchanged.
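Conceptually, the merge behaves like a shallow dict update (a sketch of the described semantics, not OpenClaw's implementation; the param values are illustrative):

```python
# Params from the selected model entry (models["anthropic/claude-opus-4-6"].params).
model_params = {"cacheRetention": "long", "context1m": True}

# Per-agent overrides from agents.list[].params.
agent_overrides = {"cacheRetention": "none"}

# Agent keys win; unset keys are inherited from the model entry unchanged.
effective = {**model_params, **agent_overrides}
print(effective)  # → {'cacheRetention': 'none', 'context1m': True}
```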
### Example: enable Anthropic 1M context beta header
Anthropic's 1M context window is currently beta-gated. OpenClaw can inject the
required `anthropic-beta` value when you enable `context1m` on supported Opus
or Sonnet models.
```yaml
agents:
  defaults:
    models:
      "anthropic/claude-opus-4-6":
        params:
          context1m: true
```
This maps to Anthropic's `context-1m-2025-08-07` beta header.
The header is sent only when `context1m: true` is set on that model entry.
Requirement: the credential must be eligible for long-context usage (API key
billing, or a subscription with Extra Usage enabled). Otherwise, Anthropic responds
with `HTTP 429: rate_limit_error: Extra usage is required for long context requests`.
If you authenticate Anthropic with OAuth/subscription tokens (`sk-ant-oat-*`),
OpenClaw skips the `context-1m-*` beta header because Anthropic currently
rejects that combination with HTTP 401.
## Tips for reducing token pressure
- Use `/compact` to summarize long sessions.
- Trim large tool outputs in your workflows.
- Lower `agents.defaults.imageMaxDimensionPx` for screenshot-heavy sessions.
- Keep skill descriptions short (skill list is injected into the prompt).
- Prefer smaller models for verbose, exploratory work.
See [Skills](/tools/skills) for the exact skill list overhead formula.