
Agent Observability

PraisonAI TypeScript supports 14+ observability integrations for production monitoring. Track every LLM call, tool execution, and decision your Agents make.

Supported Observability Tools (14+)

| Tool | Description | Env Variable |
| --- | --- | --- |
| Langfuse | Open-source LLM observability | LANGFUSE_SECRET_KEY |
| LangSmith | LangChain's observability platform | LANGCHAIN_API_KEY |
| LangWatch | LLM monitoring & analytics | LANGWATCH_API_KEY |
| Arize AX | Phoenix observability | ARIZE_API_KEY |
| Axiom | Log analytics | AXIOM_TOKEN |
| Braintrust | AI evaluation platform | BRAINTRUST_API_KEY |
| Helicone | LLM observability proxy | HELICONE_API_KEY |
| Laminar | AI observability | LMNR_PROJECT_API_KEY |
| Maxim | AI testing platform | MAXIM_API_KEY |
| Patronus | AI evaluation | PATRONUS_API_KEY |
| Scorecard | AI testing | SCORECARD_API_KEY |
| SigNoz | OpenTelemetry-based | SIGNOZ_ACCESS_TOKEN |
| Traceloop | OpenLLMetry | TRACELOOP_API_KEY |
| Weave | Weights & Biases | WANDB_API_KEY |
| Console | Built-in console logging | (none) |
| Memory | Built-in in-memory storage | (none) |

Quick Start

import { Agent, createObservabilityAdapter, setObservabilityAdapter } from 'praisonai';

// Create adapter (langfuse, langsmith, console, memory, etc.)
const obs = await createObservabilityAdapter('langfuse');
setObservabilityAdapter(obs);

const agent = new Agent({
  name: 'Support Agent',
  instructions: 'Help users with their questions.'
});

// All Agent actions are now traced
await agent.chat('How do I reset my password?');

// Flush traces to the backend
await obs.flush();

Agent with Tracing

import { Agent, MemoryObservabilityAdapter, setObservabilityAdapter } from 'praisonai';

// Enable observability globally
const obs = new MemoryObservabilityAdapter();
setObservabilityAdapter(obs);

const agent = new Agent({
  name: 'Support Agent',
  instructions: 'Help users with their questions.',
  observability: true  // Enable tracing for this Agent
});

// All Agent actions are now traced
const response = await agent.chat('How do I reset my password?');

// View what happened
const traces = obs.getAllTraces();
for (const trace of traces) {
  console.log(`Trace: ${trace.name}`);
  for (const span of trace.spans) {
    console.log(`  ${span.type}: ${span.name} (${span.duration}ms)`);
  }
}

Multi-Agent Tracing

Track interactions between multiple Agents:
import { Agent, Agents, MemoryObservabilityAdapter, setObservabilityAdapter } from 'praisonai';

const obs = new MemoryObservabilityAdapter();
setObservabilityAdapter(obs);

const researcher = new Agent({
  name: 'Researcher',
  instructions: 'Research topics thoroughly.',
  observability: true
});

const writer = new Agent({
  name: 'Writer',
  instructions: 'Write clear summaries.',
  observability: true
});

const agents = new Agents({
  agents: [researcher, writer],
  tasks: [
    { agent: researcher, description: 'Research: {topic}' },
    { agent: writer, description: 'Summarize the research' }
  ],
  observability: true  // Trace the entire workflow
});

await agents.start({ topic: 'AI trends 2024' });

// Analyze the workflow
const traces = obs.getAllTraces();
console.log(`Total traces: ${traces.length}`);
console.log(`Total LLM calls: ${traces.flatMap(t => t.spans).filter(s => s.type === 'llm').length}`);

Agent Performance Monitoring

Track latency, tokens, and costs:
import { Agent, MemoryObservabilityAdapter, setObservabilityAdapter } from 'praisonai';

const obs = new MemoryObservabilityAdapter();
setObservabilityAdapter(obs);

const agent = new Agent({
  name: 'Analytics Agent',
  instructions: 'Analyze data and provide insights.',
  observability: true
});

// Run multiple queries
await agent.chat('Analyze sales trends');
await agent.chat('Compare Q1 vs Q2');
await agent.chat('Predict Q3 performance');

// Analyze Agent performance
const traces = obs.getAllTraces();
const llmSpans = traces.flatMap(t => t.spans).filter(s => s.type === 'llm');

const stats = {
  totalCalls: llmSpans.length,
  avgLatency: llmSpans.reduce((sum, s) => sum + s.duration, 0) / llmSpans.length,
  totalTokens: llmSpans.reduce((sum, s) => sum + (s.attributes?.tokens || 0), 0)
};

console.log('Agent Performance:');
console.log(`  LLM Calls: ${stats.totalCalls}`);
console.log(`  Avg Latency: ${stats.avgLatency.toFixed(0)}ms`);
console.log(`  Total Tokens: ${stats.totalTokens}`);
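
To turn token usage into an approximate cost, apply your model's pricing to the traced token counts. A minimal sketch continuing the example above, with an assumed flat rate (replace it with your provider's actual per-token pricing):

// Rough cost estimate from the traced token counts (continues the example above).
// COST_PER_1K_TOKENS is an assumed flat rate; substitute your model's real pricing.
const COST_PER_1K_TOKENS = 0.002;
const estimatedCost = (stats.totalTokens / 1000) * COST_PER_1K_TOKENS;
console.log(`  Estimated Cost: ~$${estimatedCost.toFixed(4)}`);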

Agent Debugging

Debug Agent decisions and tool calls:
import { Agent, createTool, ConsoleObservabilityAdapter, setObservabilityAdapter } from 'praisonai';

// Console adapter logs everything in real-time
const obs = new ConsoleObservabilityAdapter();
setObservabilityAdapter(obs);

const searchTool = createTool({
  name: 'search',
  description: 'Search for information',
  parameters: {
    type: 'object',
    properties: { query: { type: 'string' } },
    required: ['query']
  },
  execute: async ({ query }) => `Results for: ${query}`
});

const agent = new Agent({
  name: 'Debug Agent',
  instructions: 'Search for information to answer questions.',
  tools: [searchTool],
  observability: true
});

await agent.chat('Find information about TypeScript');

// Console output shows:
// [TRACE START] agent-execution
//   [SPAN START] llm-call (llm)
//   [SPAN END] llm-call - completed (1234ms)
//   [SPAN START] tool-call (tool) - search
//   [SPAN END] tool-call - completed (56ms)
//   [SPAN START] llm-call (llm)
//   [SPAN END] llm-call - completed (987ms)
// [TRACE END] agent-execution - completed
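
The console adapter is useful for watching runs live; to inspect failures after the fact, you can collect the same spans with MemoryObservabilityAdapter instead. A sketch, assuming spans record a status value when they end (verify the field name against the span objects your adapter actually produces):

import { MemoryObservabilityAdapter, setObservabilityAdapter } from 'praisonai';

// Post-hoc debugging sketch: capture spans in memory and pull out failures.
// Assumes each span exposes a `status` field set when the span ends; verify
// this against your adapter's span shape before relying on it.
const memObs = new MemoryObservabilityAdapter();
setObservabilityAdapter(memObs);

await agent.chat('Find information about TypeScript');

const failedSpans = memObs.getAllTraces()
  .flatMap(t => t.spans)
  .filter(s => s.status === 'error');
console.log(`Failed spans: ${failedSpans.length}`);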

Langfuse Integration

Send Agent traces to Langfuse for production monitoring:
import { Agent, LangfuseObservabilityProvider, setObservabilityAdapter } from 'praisonai';

const langfuse = new LangfuseObservabilityProvider({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!,
  baseUrl: 'https://cloud.langfuse.com'
});

setObservabilityAdapter(langfuse);

const agent = new Agent({
  name: 'Production Agent',
  instructions: 'Handle customer inquiries.',
  observability: true
});

// All traces automatically sent to Langfuse
await agent.chat('I need help with my order');

// View in Langfuse dashboard:
// - Trace timeline
// - Token usage
// - Latency breakdown
// - Cost tracking
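
Hosted backends like Langfuse buffer traces and send them in batches, so flush before the process exits in long-running services. A minimal sketch, assuming the provider exposes the same flush() method used in the Quick Start:

// Flush buffered traces on shutdown (sketch).
// Assumes the provider exposes flush(), as the Quick Start adapter does.
process.on('SIGTERM', async () => {
  await langfuse.flush();
  process.exit(0);
});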

Custom Observability for Agents

Build custom monitoring:
import { Agent, ObservabilityAdapter, TraceContext, SpanContext, setObservabilityAdapter } from 'praisonai';

// `datadogTracer` below stands in for your Datadog tracing client; it is not
// provided by praisonai, so wire it to your own APM setup.
class DatadogAgentAdapter implements ObservabilityAdapter {
  startTrace(name: string, metadata?: Record<string, any>): TraceContext {
    // Send to Datadog APM
    const traceId = datadogTracer.startSpan(name, { tags: metadata });
    return {
      traceId,
      name,
      startSpan: (spanName, type) => this.createSpan(traceId, spanName, type),
      end: (status) => datadogTracer.finishSpan(traceId, { status })
    };
  }
  
  private createSpan(traceId: string, name: string, type: string): SpanContext {
    const spanId = datadogTracer.startChildSpan(traceId, name, { type });
    return {
      spanId,
      name,
      type,
      setAttributes: (attrs) => datadogTracer.setTags(spanId, attrs),
      addEvent: (event, data) => datadogTracer.addEvent(spanId, event, data),
      end: (status) => datadogTracer.finishSpan(spanId, { status })
    };
  }
}

// Use with Agents
setObservabilityAdapter(new DatadogAgentAdapter());
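
If you just need something self-contained to start from, the sketch below appends every trace and span event to a JSONL file. It assumes the TraceContext and SpanContext shapes shown in the Datadog example; the ObservabilityAdapter interface in your installed version may require additional members (such as flush()), so adjust accordingly.

import { appendFileSync } from 'fs';
import { ObservabilityAdapter, TraceContext, SpanContext, setObservabilityAdapter } from 'praisonai';

// Sketch: append each observability event as one JSON line.
// Assumes the TraceContext/SpanContext shapes shown above; adjust to your version.
class FileObservabilityAdapter implements ObservabilityAdapter {
  constructor(private path = 'agent-traces.jsonl') {}

  private write(record: Record<string, any>) {
    appendFileSync(this.path, JSON.stringify({ ts: Date.now(), ...record }) + '\n');
  }

  startTrace(name: string, metadata?: Record<string, any>): TraceContext {
    const traceId = `trace-${Math.random().toString(36).slice(2)}`;
    this.write({ kind: 'trace-start', traceId, name, metadata });
    return {
      traceId,
      name,
      startSpan: (spanName: string, type: string) => this.createSpan(traceId, spanName, type),
      end: (status?: string) => this.write({ kind: 'trace-end', traceId, status })
    };
  }

  private createSpan(traceId: string, name: string, type: string): SpanContext {
    const spanId = `span-${Math.random().toString(36).slice(2)}`;
    this.write({ kind: 'span-start', traceId, spanId, name, type });
    return {
      spanId,
      name,
      type,
      setAttributes: (attrs: Record<string, any>) => this.write({ kind: 'span-attributes', spanId, attrs }),
      addEvent: (event: string, data?: Record<string, any>) => this.write({ kind: 'span-event', spanId, event, data }),
      end: (status?: string) => this.write({ kind: 'span-end', spanId, status })
    };
  }
}

setObservabilityAdapter(new FileObservabilityAdapter());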

Agent Audit Logging

Track Agent actions for compliance:
import { Agent, createTool, MemoryObservabilityAdapter, setObservabilityAdapter } from 'praisonai';

const obs = new MemoryObservabilityAdapter();
setObservabilityAdapter(obs);

const sensitiveDataTool = createTool({
  name: 'access_customer_data',
  description: 'Access customer information',
  parameters: {
    type: 'object',
    properties: { customerId: { type: 'string' } },
    required: ['customerId']
  },
  execute: async ({ customerId }) => {
    // Tool execution is automatically traced
    return `Customer data for ${customerId}`;
  }
});

const agent = new Agent({
  name: 'Support Agent',
  instructions: 'Help customers with their accounts.',
  tools: [sensitiveDataTool],
  observability: true
});

await agent.chat('Look up customer C-12345');

// Generate audit log
const auditLog = obs.getAllTraces().flatMap(t => 
  t.spans.filter(s => s.type === 'tool').map(s => ({
    timestamp: s.startTime,
    action: s.name,
    agent: t.metadata?.agentName,
    details: s.attributes
  }))
);

console.log('Audit Log:', JSON.stringify(auditLog, null, 2));
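
For retention, persist the audit log instead of only printing it. A minimal sketch writing one file per day (the path and naming are just an example):

import { writeFileSync } from 'fs';

// Persist the audit log to disk for retention (example path/naming only).
const date = new Date().toISOString().slice(0, 10);
writeFileSync(`audit-log-${date}.json`, JSON.stringify(auditLog, null, 2));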

Switching Observability Tools

import { createObservabilityAdapter, setObservabilityAdapter } from 'praisonai';

// Switch between tools easily
const tool = process.env.OBS_TOOL || 'memory';
const obs = await createObservabilityAdapter(tool as any);
setObservabilityAdapter(obs);

// Supported tools: langfuse, langsmith, langwatch, arize, axiom,
// braintrust, helicone, laminar, maxim, patronus, scorecard,
// signoz, traceloop, weave, console, memory, noop

Multi-Agent Attribution

Track agent_id, run_id, and trace_id across multi-agent workflows:
import { Agent, Agents, MemoryObservabilityAdapter, setObservabilityAdapter } from 'praisonai';

const obs = new MemoryObservabilityAdapter();
setObservabilityAdapter(obs);

const researcher = new Agent({
  name: 'Researcher',
  instructions: 'Research topics thoroughly.'
});

const writer = new Agent({
  name: 'Writer', 
  instructions: 'Write clear summaries.'
});

const agents = new Agents({
  agents: [researcher, writer],
  tasks: [
    { agent: researcher, description: 'Research: {topic}' },
    { agent: writer, description: 'Summarize the research' }
  ]
});

await agents.start({ topic: 'AI trends 2024' });

// Traces include agent attribution
const traces = obs.getAllTraces();
for (const trace of traces) {
  console.log(`Agent: ${trace.attribution?.agentId}`);
  console.log(`Run: ${trace.attribution?.runId}`);
}
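
Attribution also makes per-agent aggregation straightforward, for example counting LLM calls per agent using the same fields shown above:

// Count LLM spans per agent using the attribution shown above.
const llmCallsByAgent: Record<string, number> = {};
for (const trace of traces) {
  const agentId = trace.attribution?.agentId ?? 'unknown';
  const llmCalls = trace.spans.filter(s => s.type === 'llm').length;
  llmCallsByAgent[agentId] = (llmCallsByAgent[agentId] ?? 0) + llmCalls;
}
console.log('LLM calls by agent:', llmCallsByAgent);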

Environment Variables

# Langfuse
export LANGFUSE_PUBLIC_KEY=pk-...
export LANGFUSE_SECRET_KEY=sk-...
export LANGFUSE_HOST=https://cloud.langfuse.com

# LangSmith
export LANGCHAIN_API_KEY=ls-...
export LANGCHAIN_PROJECT=my-project

# LangWatch
export LANGWATCH_API_KEY=...

# Arize AX (Phoenix)
export ARIZE_API_KEY=...

# Axiom
export AXIOM_TOKEN=...
export AXIOM_DATASET=ai-traces

# Braintrust
export BRAINTRUST_API_KEY=...

# Helicone
export HELICONE_API_KEY=...

# Laminar
export LMNR_PROJECT_API_KEY=...

# Maxim
export MAXIM_API_KEY=...

# Patronus
export PATRONUS_API_KEY=...

# Scorecard
export SCORECARD_API_KEY=...

# SigNoz
export SIGNOZ_ACCESS_TOKEN=...
export SIGNOZ_ENDPOINT=...

# Traceloop
export TRACELOOP_API_KEY=...

# Weave (W&B)
export WANDB_API_KEY=...
export WANDB_PROJECT=my-project
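
With these keys in the environment, the adapter can be chosen from configuration at startup. A sketch reusing createObservabilityAdapter and the OBS_TOOL convention from the switching example, with a simple fallback based on which key is present:

import { createObservabilityAdapter, setObservabilityAdapter } from 'praisonai';

// Choose the backend from the environment; fall back to in-memory tracing.
const backend = process.env.OBS_TOOL
  ?? (process.env.LANGFUSE_SECRET_KEY ? 'langfuse' : 'memory');
const adapter = await createObservabilityAdapter(backend as any);
setObservabilityAdapter(adapter);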

Tool Features Matrix

The matrix compares each adapter (Langfuse, LangSmith, LangWatch, Arize, Axiom, Braintrust, Helicone, Laminar, Maxim, Patronus, Scorecard, SigNoz, Traceloop, Weave, Console, and Memory) across six capabilities: traces, spans, events, error capture, metrics, and export.