PraisonAI provides CLI commands for profiling agent performance without modifying code.
## Quick Start
```bash
# Quick inline profiling with timeline diagram
praisonai "What is 2+2?" --profile

# Deep profiling with call graph
praisonai "What is 2+2?" --profile --profile-deep

# Profile a query with detailed timing breakdown
praisonai profile query "What is 2+2?"

# Profile with file grouping
praisonai profile query "Hello" --show-files --limit 20

# Profile startup time
praisonai profile startup

# Profile import times
praisonai profile imports

# Run comprehensive profiling suite
praisonai profile suite --quick
```
## Inline Profiling (`--profile` flag)

The simplest way to profile any command is with the `--profile` flag:

```bash
praisonai "Your prompt here" --profile
```
This outputs a visual timeline diagram showing execution phases:
```text
======================================================================
PraisonAI Profile Report
======================================================================
Run ID: abc12345
Timestamp: 2026-01-02T05:38:01.749771Z
Method: cli_direct
Version: 3.0.3

## Timeline Diagram

ENTER ─────────────────────────────────────────────────────────► RESPONSE
      │     imports     │ init  │           network            │
      │      843ms      │  0ms  │            1414ms            │
      └─────────────────┴───────┴──────────────────────────────┘
                         TOTAL: 2257ms

## Execution Timeline
---------------------------------------------
Imports                   :    843.23 ms
Agent Init                :      0.18 ms
Execution                 :   1413.50 ms
───────────────────────────────────────────
⏱ Time to First Response :   2256.91 ms
TOTAL                     :   2257.09 ms
```
## Deep Profiling

For detailed function-level analysis with call graphs:

```bash
praisonai "Your prompt" --profile --profile-deep
```
This adds:

- **Decision Trace**: Agent config, model, streaming mode, tools
- **Top Functions**: Cumulative time by function
- **Module Breakdown**: Time grouped by module category
- **Call Graph**: Caller/callee relationships
## JSON Output

Get machine-readable profile data:

```bash
praisonai "Your prompt" --profile --profile-format json
```
The JSON output includes the timeline diagram as a string field for easy parsing.
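For example, a downstream script can pull the diagram back out of the report. A minimal sketch, assuming the report was redirected to `profile.json` and the diagram lives under a field named `timeline_diagram` (that field name is an assumption for illustration, not confirmed by the CLI docs; inspect the keys in a real report):

```python
import json

# Load the JSON profile report produced by --profile-format json.
with open("profile.json") as f:
    report = json.load(f)

# "timeline_diagram" is an assumed field name for the diagram string;
# check report.keys() to find the actual key in your version.
print(report.get("timeline_diagram", "<no timeline field found>"))
```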
## Commands

### profile query

Profile a query execution with detailed timing breakdown:

```bash
praisonai profile query "Your prompt here" [OPTIONS]
```
Options:

| Option | Short | Description |
|--------|-------|-------------|
| `--model` | `-m` | Model to use |
| `--stream/--no-stream` | | Use streaming mode |
| `--deep` | | Enable deep call tracing (higher overhead) |
| `--limit` | `-n` | Top N functions to show (default: 30) |
| `--sort` | `-s` | Sort by: `cumulative` or `tottime` |
| `--show-files` | | Group timing by file/module |
| `--show-callers` | | Show caller functions |
| `--show-callees` | | Show callee functions |
| `--importtime` | | Show module import times |
| `--first-token` | | Track time to first token (streaming) |
| `--save` | | Save artifacts to path (`.prof`, `.txt`) |
| `--format` | `-f` | Output format: `text` or `json` |
Examples:

```bash
# Basic profiling with console output
praisonai profile query "Write a poem about AI"

# Profile with file grouping
praisonai profile query "Hello" --show-files --limit 15

# Save JSON report
praisonai profile query "Analyze sentiment" --format json --save=./profile_results

# Track time to first token in streaming mode
praisonai profile query "Test" --stream --first-token

# Deep call tracing with caller/callee info
praisonai profile query "Test" --deep --show-callers --show-callees
```
Output:

```text
======================================================================
PraisonAI Profile Report
======================================================================

## System Information
Timestamp: 2025-12-31T17:37:46.662247Z
Python Version: 3.12.11
Platform: macOS-15.7.4-arm64-arm-64bit
PraisonAI: 2.9.2
Model: default

## Timing Breakdown
CLI Parse:          0.00 ms
Imports:          867.21 ms
Agent Construct:    0.06 ms
Model Init:         0.00 ms
Total Run:       2302.64 ms

## Per-Function Timing (Top Functions)
----------------------------------------------------------------------
Function                    Calls    Cumulative (ms)    Self (ms)
----------------------------------------------------------------------
start                           1            2302.57         0.03
chat                            1            2302.54         0.03
_chat_completion                1            2302.45         0.02
...
======================================================================
```
### profile imports

Profile module import times to identify slow imports:

```bash
praisonai profile imports
```
Output:
======================================================================
Import Time Analysis
======================================================================
Module Self (μs) Cumul (μs)
----------------------------------------------------------------------
praisonaiagents 624 1772006
praisonaiagents.workflows 280 1617822
praisonaiagents.agent.agent 30 1569219
openai 1163 1369693
openai.types 1679 786129
...
----------------------------------------------------------------------
Total import time: 1772.01 ms
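This breakdown mirrors CPython's built-in `-X importtime` instrumentation. If you want the raw data outside the CLI, a minimal sketch along these lines reproduces it (the choice of `praisonaiagents` as the module is just the example from the table above; this is not how the CLI itself is implemented):

```python
import subprocess
import sys

# Run a fresh interpreter with -X importtime; CPython writes lines of the
# form "import time: <self us> | <cumulative us> | <module>" to stderr.
proc = subprocess.run(
    [sys.executable, "-X", "importtime", "-c", "import praisonaiagents"],
    capture_output=True,
    text=True,
)

rows = []
for line in proc.stderr.splitlines():
    if not line.startswith("import time:"):
        continue
    parts = line.split("|")
    try:
        rows.append((int(parts[1]), parts[2].strip()))  # (cumulative us, module)
    except (IndexError, ValueError):
        continue  # skip the header row

# Show the slowest imports by cumulative time, like the table above.
for cumulative_us, module in sorted(rows, reverse=True)[:10]:
    print(f"{cumulative_us:>10} us  {module}")
```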
### profile startup

Profile CLI startup time (cold and warm):

```bash
praisonai profile startup
```
Output:
==================================================
Startup Time Analysis
==================================================
Cold Start: 62.84 ms
Warm Start: 79.25 ms
==================================================
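Cold and warm numbers can be close, or even inverted as in the sample above, because both runs use a fresh interpreter and differ mainly in OS and bytecode caches. If you want to sanity-check the numbers yourself, here is a minimal sketch of the same idea (illustrative only, not the CLI's actual implementation):

```python
import subprocess
import sys
import time

def time_fresh_import(module: str) -> float:
    """Time a brand-new interpreter importing `module`, in milliseconds."""
    start = time.perf_counter()
    subprocess.run([sys.executable, "-c", f"import {module}"], check=True)
    return (time.perf_counter() - start) * 1000

cold = time_fresh_import("praisonaiagents")  # caches likely cold
warm = time_fresh_import("praisonaiagents")  # caches warmed by the first run
print(f"Cold: {cold:.2f} ms, Warm: {warm:.2f} ms")
```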
### profile suite

Run a comprehensive profiling suite with multiple scenarios:

```bash
praisonai profile suite [OPTIONS]
```
Options:

| Option | Short | Default | Description |
|--------|-------|---------|-------------|
| `--output` | `-o` | `/tmp/praisonai_profile_suite` | Output directory for results |
| `--iterations` | `-n` | `3` | Iterations per scenario |
| `--quick` | | `false` | Quick mode (fewer iterations) |
Examples:
# Full suite (4 scenarios, 3 iterations each)
praisonai profile suite
# Quick mode (2 scenarios, 1 iteration)
praisonai profile suite --quick
# Custom output directory
praisonai profile suite --output ./my_profile_results
# More iterations for statistical significance
praisonai profile suite --iterations 5
Output:

```text
🔬 Running Profile Suite...
   Output: /tmp/praisonai_profile_suite
   Scenarios: 4
   Iterations: 3

📊 Measuring startup times...
   Cold: 80.38ms, Warm: 84.71ms

📊 Analyzing imports...
   Top import: praisonaiagents (1908.99ms)

📊 Running scenario: simple_non_stream
   Iteration 1: 6366.58ms
   Total time: 6366.58ms (±0.00ms)

📊 Running scenario: simple_stream
   Iteration 1: 3484.61ms
   Total time: 3484.61ms (±0.00ms)

✅ Suite complete. Results saved to /tmp/praisonai_profile_suite

============================================================
Profile Suite Summary
============================================================
Startup Cold: 80.38ms
Startup Warm: 84.71ms
Top Import: praisonaiagents
            Time: 1908.99ms

Scenario Results:
  simple_non_stream: 6366.58ms (±0.00ms)
  simple_stream: 3484.61ms (±0.00ms)

✅ Full results saved to: /tmp/praisonai_profile_suite
```
Output Files:

- `suite_results.json` - Machine-readable JSON with all timing data
- `suite_report.txt` - Human-readable summary report
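A minimal sketch for consuming `suite_results.json` in a script. The key names below (`scenarios`, `mean_ms`) are assumptions for illustration only; inspect a file from a real run to confirm the actual schema:

```python
import json

with open("/tmp/praisonai_profile_suite/suite_results.json") as f:
    results = json.load(f)

# Hypothetical schema: a "scenarios" mapping of name -> {"mean_ms": float, ...}.
# Adjust the keys to match what your version of the suite actually writes.
for name, stats in results.get("scenarios", {}).items():
    print(f"{name}: {stats.get('mean_ms', '?')} ms")
```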
## Advanced Usage

### Deep Call Tracing

Enable deep call tracing for detailed call graph analysis:

```bash
praisonai profile query "Test" --deep --show-callers --show-callees
```
Deep call tracing adds significant overhead. Use only for detailed debugging.
### Save Artifacts

Save profiling artifacts for later analysis:

```bash
praisonai profile query "Test" --save=./profile_results
```

This creates:

- `profile_results.prof` - Binary cProfile data (can be loaded with `pstats`)
- `profile_results.txt` - Human-readable report
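Since the `.prof` file is standard cProfile output, Python's stdlib `pstats` can slice it however you like, including caller/callee views. A quick sketch (`_chat_completion` is taken from the sample output above):

```python
import pstats

# Load the binary cProfile data saved by --save.
stats = pstats.Stats("profile_results.prof")

# Top 15 functions by cumulative time.
stats.sort_stats("cumulative").print_stats(15)

# Caller/callee relationships for a function of interest.
stats.print_callers("_chat_completion")
stats.print_callees("_chat_completion")
```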
### JSON Output

Get machine-readable output for CI/CD integration:

```bash
praisonai profile query "Test" --format json > profile.json
```
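One way to use this in CI is a small gate that fails the build when timings exceed a budget. A sketch using the `timing` fields from the JSON report (see the schema under JSON Output below); the 1500 ms and 6000 ms thresholds are arbitrary examples:

```python
import json
import sys

with open("profile.json") as f:
    report = json.load(f)

timing = report["timing"]

# Arbitrary example budgets; tune for your environment.
budgets = {"imports_ms": 1500, "total_run_ms": 6000}

failures = [
    f"{key}={timing[key]:.1f}ms exceeds budget {limit}ms"
    for key, limit in budgets.items()
    if timing.get(key, 0) > limit
]

if failures:
    print("\n".join(failures))
    sys.exit(1)  # non-zero exit fails the CI job
print("All timing budgets met.")
```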
### Streaming with First Token Tracking

Track time to first token in streaming mode:

```bash
praisonai profile query "Test" --stream --first-token
```
### Combine with py-spy

For production-grade flamegraphs:

```bash
# Install py-spy
pip install py-spy

# Record with py-spy (requires sudo on some systems)
py-spy record -o profile.svg -- python -m praisonai "Your task"

# Or for a running process
py-spy record -o profile.svg --pid <PID>
```
## CI/CD Integration

Add profiling to your CI pipeline:

```yaml
# .github/workflows/benchmark.yml
name: Performance Benchmark

on:
  push:
    branches: [main]

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'
      - name: Install dependencies
        run: pip install praisonai
      - name: Run profile suite
        run: |
          praisonai profile suite --quick --output ./benchmark_results
      - name: Upload results
        uses: actions/upload-artifact@v3
        with:
          name: benchmark-results
          path: ./benchmark_results/
```
## Output Formats

### Text Output (Default)

Human-readable format printed to the terminal with timing breakdown, function stats, and response preview.

### JSON Output

Machine-readable format for processing:
```json
{
  "timestamp": "2025-12-31T17:37:46.662247Z",
  "metadata": {
    "python_version": "3.12.11",
    "platform": "macOS-15.7.4-arm64-arm-64bit",
    "praisonai_version": "2.9.2",
    "model": "default"
  },
  "prompt": "hi",
  "response_preview": "Hi there! How can I help...",
  "timing": {
    "cli_parse_ms": 0.0003,
    "imports_ms": 851.95,
    "agent_construction_ms": 0.05,
    "model_init_ms": 0.0001,
    "first_token_ms": 0.0,
    "total_run_ms": 5712.11
  },
  "top_functions": [ ... ]
}
```
## Best Practices

**Use `suite` for comprehensive benchmarks.** The `suite` command runs multiple scenarios with warmup:

```bash
praisonai profile suite --iterations 5
```
**Profile in a production-like environment.** Run benchmarks with data sizes and network conditions similar to production.
**Use `--show-files` to identify hotspots.** Group timing by file to find which modules are slowest:

```bash
praisonai profile query "Test" --show-files --limit 30
```
**Compare streaming vs. non-streaming.** Streaming often has a faster time to first token:

```bash
praisonai profile query "Test" --stream --first-token
praisonai profile query "Test" --no-stream
```
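To compare the two runs programmatically, save each as JSON and diff the documented `timing` fields. A small sketch; the `stream.json`/`nostream.json` filenames are just examples:

```python
import json

def load_timing(path: str) -> dict:
    with open(path) as f:
        return json.load(f)["timing"]

# Produced by e.g.:
#   praisonai profile query "Test" --stream --first-token --format json > stream.json
#   praisonai profile query "Test" --no-stream --format json > nostream.json
stream = load_timing("stream.json")
nostream = load_timing("nostream.json")

print(f"Streaming first token: {stream['first_token_ms']:.1f} ms")
print(f"Streaming total:       {stream['total_run_ms']:.1f} ms")
print(f"Non-streaming total:   {nostream['total_run_ms']:.1f} ms")
```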
## Troubleshooting

**Slow imports.** Import times are dominated by the OpenAI SDK. This is expected:

```bash
praisonai profile imports
```

Consider lazy imports if startup time is critical.
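For your own code that pulls in heavy dependencies, one lazy-import pattern defers the cost to first use. A minimal illustration (not PraisonAI's internal approach):

```python
import importlib

_openai = None

def get_openai():
    """Import the heavy SDK on first use instead of at module import time."""
    global _openai
    if _openai is None:
        _openai = importlib.import_module("openai")
    return _openai

# Module import stays fast; the SDK import cost is paid only on first call.
openai = get_openai()
print(openai.__version__)
```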
**High variance in benchmarks.** Increase iterations in suite mode:

```bash
praisonai profile suite --iterations 10
```
**Deep tracing overhead.** Deep tracing adds significant overhead. Use it only for debugging:

```bash
# Without deep tracing (faster)
praisonai profile query "Test"

# With deep tracing (slower but more detail)
praisonai profile query "Test" --deep
```