Observability Suite
PraisonAI provides a unified observability suite that supports 20+ providers. All integrations are lazily loaded, so they add zero overhead when observability is not in use.
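The lazy-loading behavior can be pictured roughly as below. This is an illustrative sketch only (the _load_provider helper is hypothetical, not the suite's actual internals): the provider SDK is imported the first time it is needed rather than at package import time.
import importlib

_backend = None

def _load_provider(module_name: str):
    # Hypothetical sketch: the provider SDK is imported only on first use,
    # so unused providers add no import or runtime cost.
    global _backend
    if _backend is None:
        _backend = importlib.import_module(module_name)
    return _backend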
Quick Start
Init-Only Auto-Instrumentation (Recommended)
The simplest way to add observability is to call obs.init() once; all agent operations are then traced automatically:
from praisonai_tools.observability import obs
# Auto-detect provider and auto-instrument all agents
obs.init()
# That's it! Everything below is now traced automatically
from praisonaiagents import Agent
agent = Agent(name="Assistant", instructions="You are helpful.")
agent.chat("Hello!") # Auto-traced to your provider
Auto-instrumentation patches Agent.chat(), Agent.start(), Agent.run(), and Agents.start() to create spans automatically. No explicit obs.trace() wrappers needed!
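Conceptually, the patching looks like the sketch below. This is illustrative only, not the suite's actual instrumentation code:
from praisonaiagents import Agent
from praisonai_tools.observability import obs
from praisonai_tools.observability.base import SpanKind

_original_chat = Agent.chat

def _traced_chat(self, *args, **kwargs):
    # Conceptual sketch: wrap every Agent.chat() call in an AGENT span.
    span_name = getattr(self, "name", "agent") + ".chat"
    with obs.span(span_name, kind=SpanKind.AGENT):
        return _original_chat(self, *args, **kwargs)

Agent.chat = _traced_chat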
Explicit Provider Selection
from praisonai_tools.observability import obs
# Specify provider explicitly
obs.init(provider="langfuse")
# Or disable auto-instrumentation for manual control
obs.init(auto_instrument=False)
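With auto-instrumentation off, nothing is recorded unless you open traces and spans yourself (see Core Concepts below). A minimal sketch, assuming provider and auto_instrument can be passed together:
from praisonaiagents import Agent
from praisonai_tools.observability import obs
from praisonai_tools.observability.base import SpanKind

obs.init(provider="langfuse", auto_instrument=False)

agent = Agent(name="Assistant", instructions="You are helpful.")

# Manual tracing: only the operations you wrap are recorded.
with obs.trace("manual-workflow"):
    with obs.span("assistant-chat", kind=SpanKind.AGENT):
        agent.chat("Hello!")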
Supported Providers
| Provider | Environment Variables | Install |
|---|---|---|
| AgentOps | AGENTOPS_API_KEY | pip install agentops |
| Langfuse | LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY | pip install opentelemetry-sdk opentelemetry-exporter-otlp |
| LangSmith | LANGSMITH_API_KEY | pip install opentelemetry-sdk opentelemetry-exporter-otlp |
| Traceloop | TRACELOOP_API_KEY | pip install traceloop-sdk |
| Arize Phoenix | PHOENIX_API_KEY | pip install arize-phoenix |
| OpenLIT | - | pip install openlit |
| Langtrace | LANGTRACE_API_KEY | pip install langtrace-python-sdk |
| LangWatch | LANGWATCH_API_KEY | pip install langwatch |
| Datadog | DD_API_KEY | pip install ddtrace |
| MLflow | MLFLOW_TRACKING_URI | pip install mlflow |
| Opik | OPIK_API_KEY | pip install opik |
| Portkey | PORTKEY_API_KEY | pip install portkey-ai |
| Braintrust | BRAINTRUST_API_KEY | pip install braintrust |
| Maxim | MAXIM_API_KEY | pip install maxim-py |
| Weave | WANDB_API_KEY | pip install weave |
| Neatlogs | NEATLOGS_API_KEY | - |
| LangDB | LANGDB_API_KEY | - |
| Atla | ATLA_API_KEY | pip install atla-insights |
| Patronus | PATRONUS_API_KEY | pip install patronus |
| TrueFoundry | TRUEFOUNDRY_API_KEY | - |
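Credentials come from the environment variables listed above. For example, a Langfuse setup might look like the sketch below (the key values are placeholders; in practice you would export them in your shell or a .env file rather than setting them in code):
import os

# Placeholders for the Langfuse credentials listed in the table above.
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..."
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..."

from praisonai_tools.observability import obs

# With the keys present, obs.init() can auto-detect Langfuse;
# passing provider="langfuse" makes the choice explicit.
obs.init(provider="langfuse")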
Core Concepts
Traces and Spans
from praisonai_tools.observability import obs
from praisonai_tools.observability.base import SpanKind
obs.init(provider="langfuse")
# Create a trace for a workflow
with obs.trace("my-workflow", session_id="user-123"):
    # Create spans for individual operations
    with obs.span("llm-call", kind=SpanKind.LLM) as span:
        span.model = "gpt-4o-mini"
        span.input_tokens = 100
        span.output_tokens = 50
        # ... your LLM call
    with obs.span("tool-call", kind=SpanKind.TOOL) as span:
        span.tool_name = "search"
        # ... your tool call
Logging LLM Calls
obs.log_llm_call(
    model="gpt-4o-mini",
    input_messages="What is 2+2?",
    output="4",
    input_tokens=10,
    output_tokens=1,
)
obs.log_tool_call(
    tool_name="search",
    tool_input={"query": "PraisonAI"},
    tool_output={"results": [...]},
)
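For example, a small wrapper around a tool can log every invocation. The web_search function below is a hypothetical stand-in, not a PraisonAI tool:
from praisonai_tools.observability import obs

def web_search(query: str) -> dict:
    # Hypothetical tool used only to illustrate logging; replace with a real tool.
    results = {"results": [f"stub result for {query}"]}
    obs.log_tool_call(
        tool_name="web_search",
        tool_input={"query": query},
        tool_output=results,
    )
    return results

web_search("PraisonAI")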
Decorators
@obs.decorator("my-function", SpanKind.CUSTOM)
def my_function():
    return "result"
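Every call to the decorated function is then recorded as a span. A minimal usage sketch, assuming an initialized provider:
from praisonai_tools.observability import obs
from praisonai_tools.observability.base import SpanKind

obs.init(provider="langfuse")

@obs.decorator("fetch-data", SpanKind.CUSTOM)
def fetch_data(url: str) -> str:
    # Each call is wrapped in a "fetch-data" span of kind CUSTOM.
    return f"data from {url}"

with obs.trace("decorator-demo"):
    fetch_data("https://example.com")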
Multi-Agent Tracing
from praisonai_tools.observability import obs
from praisonai_tools.observability.base import SpanKind
obs.init(provider="langfuse")
with obs.trace("multi-agent-workflow"):
    # Agent 1
    with obs.span("agent-1", kind=SpanKind.AGENT) as span:
        span.attributes["agent_name"] = "Researcher"
        # ... agent 1 work
    # Agent 2 (child of same trace)
    with obs.span("agent-2", kind=SpanKind.AGENT) as span:
        span.attributes["agent_name"] = "Writer"
        # ... agent 2 work
CLI Commands
# List available providers
praisonai obs list
# Check provider connectivity
praisonai obs doctor
# Initialize a provider
praisonai obs init langfuse
Diagnostics
# Check observability status
print(obs.doctor())
# Output: {'enabled': True, 'provider': 'langfuse', 'connection_status': True, ...}
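The returned dict can be used to gate startup; a small sketch using the keys shown above:
status = obs.doctor()

# Fail fast if the configured provider cannot be reached.
if not status.get("connection_status"):
    raise RuntimeError(f"Observability provider {status.get('provider')} is not reachable")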