ContextManager

Defined in the manager module.
Unified facade for AI agent context management. It orchestrates budgeting, composition, optimization, and monitoring; provides hooks for exact LLM boundary snapshots; and tracks optimization history for debugging.

Constructor

model: str, default "gpt-4o-mini". No description available.

config: Optional. No description available.

session_id: str, default "". No description available.

agent_name: str, default "". No description available.

session_cache: Optional. No description available.

llm_summarize_fn: Optional. No description available.

Methods

process()

Process messages through the context pipeline.

capture_llm_boundary()

Capture exact state at LLM call boundary.

register_snapshot_callback()

Register a callback for LLM boundary snapshots.
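The snapshot hooks follow an observer pattern: callbacks registered ahead of time are invoked with the captured state at each LLM call boundary, and the most recent snapshot remains retrievable afterwards. The actual callback signature and snapshot shape are not documented here; the self-contained sketch below (all names and fields hypothetical) only illustrates the pattern:

```python
# Hypothetical sketch of the snapshot-hook pattern; the real
# ContextManager callback signature and snapshot contents may differ.
class SnapshotHooks:
    def __init__(self):
        self._callbacks = []
        self._last_snapshot = None

    def register_snapshot_callback(self, fn):
        # fn is called with the snapshot at each LLM boundary
        self._callbacks.append(fn)

    def capture_llm_boundary(self, messages, system_prompt):
        snapshot = {"messages": list(messages), "system_prompt": system_prompt}
        self._last_snapshot = snapshot
        for fn in self._callbacks:
            fn(snapshot)
        return snapshot

    def get_last_snapshot_hook(self):
        return self._last_snapshot


hooks = SnapshotHooks()
seen = []
hooks.register_snapshot_callback(seen.append)
hooks.capture_llm_boundary([{"role": "user", "content": "hi"}], "You are helpful.")
```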

get_last_snapshot_hook()

Get the last LLM boundary snapshot.

estimate_tokens()

Estimate tokens with optional validation.
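The backend behind estimate_tokens (for example, a model-specific tokenizer) is not specified in this reference. A common fallback heuristic, shown as a stand-in below, is roughly four characters per token:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token.

    Heuristic stand-in only; the real method may use a
    model-specific tokenizer and an optional validation pass.
    """
    return max(1, len(text) // 4)
```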

get_tool_budget()

Get token budget for a specific tool.

set_tool_budget()

Set token budget for a specific tool.

truncate_tool_output()

Truncate tool output according to its budget.
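Together, these three methods suggest a per-tool budget table that is consulted at truncation time. The sketch below shows that pattern in isolation; the class name, default budget, and character-based truncation are assumptions, not the library's actual implementation:

```python
class ToolBudgets:
    """Hypothetical stand-in for the per-tool budget pattern."""

    def __init__(self, default_budget: int = 2000):
        self._budgets = {}
        self._default = default_budget

    def set_tool_budget(self, tool_name: str, max_tokens: int) -> None:
        self._budgets[tool_name] = max_tokens

    def get_tool_budget(self, tool_name: str) -> int:
        return self._budgets.get(tool_name, self._default)

    def truncate_tool_output(self, tool_name: str, output: str) -> str:
        # Assume ~4 chars/token; trim output to the tool's budget.
        max_chars = self.get_tool_budget(tool_name) * 4
        if len(output) <= max_chars:
            return output
        return output[:max_chars] + "\n...[truncated]"


budgets = ToolBudgets()
budgets.set_tool_budget("web_search", 10)  # 10 tokens, roughly 40 chars
```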

get_history()

Get optimization history.

get_stats()

Get current context statistics.

emergency_truncate()

Emergency truncation when optimization isn’t enough.
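When optimization alone cannot bring the context under the limit, an emergency pass typically drops the oldest non-system messages until the token estimate fits. The real method's strategy is not documented here; the following is a sketch under that assumption:

```python
def emergency_truncate(messages, limit_tokens,
                       estimate=lambda t: max(1, len(t) // 4)):
    """Drop the oldest non-system messages until the estimate fits.

    Illustrative sketch only; the real emergency strategy may differ.
    """
    kept = list(messages)

    def total(msgs):
        return sum(estimate(m.get("content", "")) for m in msgs)

    while total(kept) > limit_tokens:
        # Find the oldest droppable (non-system) message.
        idx = next((i for i, m in enumerate(kept)
                    if m.get("role") != "system"), None)
        if idx is None:
            break  # only system messages remain; nothing left to drop
        kept.pop(idx)
    return kept
```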

get_resolved_config()

Get the fully resolved configuration with source info.

reset()

Reset manager state.

Usage

manager = ContextManager(model="gpt-4o")

# Process messages before LLM call
result = manager.process(
    messages=messages,
    system_prompt=system_prompt,
    tools=tools,
)

# Get optimized messages
optimized_messages = result["messages"]

# Check if optimization occurred
if result["optimized"]:
    print(f"Saved {result['tokens_saved']} tokens")

Source

View on GitHub

praisonaiagents/context/manager.py at line 292

Context Concept

Context Management

Context Strategies