Configure context monitoring via CLI flags, interactive commands, environment variables, or config.yaml.

CLI Flags

Enable Monitoring

praisonai chat --context-monitor

Output Path

praisonai chat --context-monitor-path ./debug/context.json

Output Format

# Human-readable (default)
praisonai chat --context-monitor-format human

# JSON format
praisonai chat --context-monitor-format json

Update Frequency

# After each turn (default)
praisonai chat --context-monitor-frequency turn

# After each tool call
praisonai chat --context-monitor-frequency tool_call

# Manual only
praisonai chat --context-monitor-frequency manual

# On overflow detection
praisonai chat --context-monitor-frequency overflow

Write Mode

# Synchronous (default)
praisonai chat --context-write-mode sync

# Asynchronous (better performance)
praisonai chat --context-write-mode async

Redaction

# Enable redaction (default)
praisonai chat --context-redact

# Disable redaction (not recommended)
praisonai chat --no-context-redact

Interactive Commands

Enable Monitoring

> /context on
Output:
✓ Context monitoring enabled
Output: ./context.txt

Disable Monitoring

> /context off

Set Path

> /context path ./debug/context.json

Set Format

> /context format json

Set Frequency

> /context frequency overflow

Manual Snapshot

> /context dump
Output:
✓ Context snapshot written to: ./context.txt

Environment Variables

export PRAISONAI_CONTEXT_MONITOR=false
export PRAISONAI_CONTEXT_MONITOR_PATH=./context.txt
export PRAISONAI_CONTEXT_MONITOR_FORMAT=human
export PRAISONAI_CONTEXT_MONITOR_FREQUENCY=turn
export PRAISONAI_CONTEXT_REDACT=true
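
The variables above can be read back in code. A minimal Python sketch, assuming the defaults shown (the variable names come from this page; the `load_monitor_env` helper is illustrative, not part of the PraisonAI API):

```python
import os

def load_monitor_env():
    """Read monitoring settings from the environment, falling back to
    the documented defaults. Illustrative helper, not PraisonAI API."""
    return {
        "enabled": os.environ.get("PRAISONAI_CONTEXT_MONITOR", "false").lower() == "true",
        "path": os.environ.get("PRAISONAI_CONTEXT_MONITOR_PATH", "./context.txt"),
        "format": os.environ.get("PRAISONAI_CONTEXT_MONITOR_FORMAT", "human"),
        "frequency": os.environ.get("PRAISONAI_CONTEXT_MONITOR_FREQUENCY", "turn"),
        "redact": os.environ.get("PRAISONAI_CONTEXT_REDACT", "true").lower() == "true",
    }

os.environ["PRAISONAI_CONTEXT_MONITOR"] = "true"
cfg = load_monitor_env()
print(cfg["enabled"], cfg["format"])  # True human
```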

config.yaml

context:
  monitor:
    enabled: false
    path: ./context.txt
    format: human
    frequency: turn
    write_mode: sync
  redact_sensitive: true
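
A quick way to catch typos in these settings is to check them against the allowed values this page documents. A hedged sketch (the key names mirror config.yaml; the validator itself is illustrative, not part of PraisonAI):

```python
# Allowed values per setting, as documented on this page.
ALLOWED = {
    "format": {"human", "json"},
    "frequency": {"turn", "tool_call", "manual", "overflow"},
    "write_mode": {"sync", "async"},
}

def validate_monitor(cfg):
    """Return a list of error strings for out-of-range settings."""
    errors = []
    for key, allowed in ALLOWED.items():
        value = cfg.get(key)
        if value not in allowed:
            errors.append(f"{key}: {value!r} not in {sorted(allowed)}")
    return errors

default = {"enabled": False, "path": "./context.txt",
           "format": "human", "frequency": "turn", "write_mode": "sync"}
print(validate_monitor(default))  # [] (defaults are valid)
```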

Output Formats

Human Format

================================================================================
PRAISONAI CONTEXT SNAPSHOT
================================================================================
Timestamp: 2024-01-07T12:00:00Z
Session ID: abc123
Agent: Assistant
Model: gpt-4o-mini
Model Limit: 128,000 tokens
Output Reserve: 8,000 tokens
Usable Budget: 120,000 tokens

--------------------------------------------------------------------------------
TOKEN LEDGER
--------------------------------------------------------------------------------
Segment              Tokens     Budget     Used
system_prompt         1,200      2,000    60.0%
history              12,500     84,616    14.8%
...
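
The "Used" column is simply tokens divided by budget, expressed as a percentage. Checking the two ledger rows shown above:

```python
# Ledger rows from the snapshot above: segment -> (tokens, budget).
rows = {"system_prompt": (1200, 2000), "history": (12500, 84616)}
for name, (tokens, budget) in rows.items():
    print(f"{name}: {100 * tokens / budget:.1f}%")
# system_prompt: 60.0%
# history: 14.8%
```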

JSON Format

{
  "timestamp": "2024-01-07T12:00:00Z",
  "session_id": "abc123",
  "agent_name": "Assistant",
  "model_name": "gpt-4o-mini",
  "budget": {
    "model_limit": 128000,
    "output_reserve": 8000,
    "usable": 120000
  },
  "ledger": {
    "segments": {
      "system_prompt": {"tokens": 1200, "budget": 2000}
    }
  }
}
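
The JSON format is convenient for tooling. A minimal sketch that loads a snapshot and sanity-checks the budget fields (the field names follow the example above; the consistency check itself is illustrative):

```python
import json

# A trimmed snapshot in the documented JSON format.
snapshot = json.loads("""
{
  "budget": {"model_limit": 128000, "output_reserve": 8000, "usable": 120000},
  "ledger": {"segments": {"system_prompt": {"tokens": 1200, "budget": 2000}}}
}
""")

# The usable budget is the model limit minus the output reserve.
b = snapshot["budget"]
assert b["usable"] == b["model_limit"] - b["output_reserve"]

# Report per-segment usage from the ledger.
for name, seg in snapshot["ledger"]["segments"].items():
    pct = 100 * seg["tokens"] / seg["budget"]
    print(f"{name}: {seg['tokens']} / {seg['budget']} tokens ({pct:.1f}%)")
```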

Frequency Options

Frequency    When Snapshots Are Written
turn         After each user/assistant turn
tool_call    After each tool execution
manual       Only on /context dump
overflow     When approaching the context limit

Troubleshooting

Snapshots not appearing

# Check if monitoring is enabled
> /context config

# Enable monitoring
> /context on

# Force a snapshot
> /context dump

Sensitive data in snapshots

# Ensure redaction is enabled
praisonai chat --context-redact

# Check redaction status
> /context config

Performance issues

# Use async writes
praisonai chat --context-write-mode async

# Reduce frequency
praisonai chat --context-monitor-frequency overflow