Output Styles

Configure how agents format their responses. Control verbosity, format, tone, and length to match your application’s needs.

Output Presets

The output parameter controls what the agent displays during execution:
from praisonaiagents import Agent

# Clean inline status (NEW)
agent = Agent(instructions="...", output="status")

# Full verbose output with panels
agent = Agent(instructions="...", output="verbose")

# Silent mode (default) - no output, just returns result
agent = Agent(instructions="...", output="silent")

Preset Reference

Preset    Description                                  Use Case
silent    Nothing (default)                            SDK/programmatic use
status    ▸ AI → thinking/responding + inline tools    Simple CLI output
trace     Status with timestamps                       Progress monitoring
debug     trace + metrics (no boxes)                   Developer debugging
verbose   Task + Tools + Response panels               Interactive use
stream    Real-time token streaming                    Chat interfaces
json      JSONL events                                 Scripting/parsing
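
A preset can also be chosen at runtime. A minimal sketch, assuming only the Agent constructor and the output values from the table above:
import sys
from praisonaiagents import Agent

# Inline status on a terminal, JSONL events when piped into another program
preset = "status" if sys.stdout.isatty() else "json"
agent = Agent(instructions="You are helpful", output=preset)
result = agent.start("Summarize today's tasks")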

Status Output Example

agent = Agent(
    instructions="You are helpful",
    tools=[get_weather],
    output="status"  # Clean inline status
)
result = agent.start("What is the weather in Tokyo?")
Output:
▸ AI → thinking...
  │ → get_weather(city="Tokyo") → "sunny in Tokyo"
▸ AI → responding...
──────────────────────────────────────────────────
Final Output:
The weather in Tokyo is sunny.

Trace Output Example (with timestamps)

agent = Agent(
    instructions="You are helpful",
    tools=[get_weather],
    output="trace"  # Full trace with timestamps
)
Output:
[13:47:33.123] ▸ AI → thinking...
[13:47:33.456]   │ → get_weather(city="Tokyo") → "sunny" [0.2s] ✓
[13:47:33.789] ▸ AI → responding...
──────────────────────────────────────────────────
Final Output:
The weather in Tokyo is sunny.

Backward Compatible Aliases

Alias            Maps To
plain, minimal   silent
normal           verbose
text, actions    status
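
An alias is accepted anywhere a preset name is; for example, assuming the mapping above:
from praisonaiagents import Agent

# "normal" is a backward compatible alias for "verbose"
agent = Agent(instructions="You are helpful", output="normal")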

Quick Start

Agent-Centric Usage

from praisonaiagents import Agent
from praisonaiagents.output import OutputStyle

# Agent with concise output style
agent = Agent(
    name="Formatter",
    instructions="You are a helpful assistant.",
    output_style=OutputStyle.concise()  # Minimal, direct responses
)

agent.start("Explain machine learning in simple terms")

# Available styles: concise(), detailed(), technical(), conversational(), structured()

Style Presets

Pre-configured styles for common use cases:
from praisonaiagents.output import OutputStyle

# Available presets
concise = OutputStyle.concise()        # Minimal, direct
detailed = OutputStyle.detailed()      # Verbose, thorough
technical = OutputStyle.technical()    # Developer-focused
conversational = OutputStyle.conversational()  # Friendly tone
structured = OutputStyle.structured()  # Organized with headers
minimal = OutputStyle.minimal()        # Bare minimum

Custom Styles

from praisonaiagents import Agent
from praisonaiagents.output import OutputStyle

style = OutputStyle(
    name="custom",
    verbosity="normal",      # minimal, normal, verbose
    format="markdown",       # markdown, plain, json
    tone="professional",     # professional, friendly, technical
    max_length=1000,         # Character limit
    include_examples=True,
    include_code_blocks=True
)

agent = Agent(
    name="CustomAgent",
    instructions="You are a helpful assistant.",
    output_style=style
)
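
Continuing the snippet above, an agent with a custom style is started like any other:
response = agent.start("Explain machine learning in simple terms")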

CLI Usage

praisonai output status        # Show current style
praisonai output set concise   # Set output style

Low-level API Reference

OutputStyle Direct Usage

from praisonaiagents.output import OutputStyle, OutputFormatter

# Use a preset style
style = OutputStyle.concise()
formatter = OutputFormatter(style)

# Format output
text = "# Hello\n\nThis is **bold** text."
plain = formatter.format(text)
print(plain)  # "Hello\n\nThis is bold text."

Format Conversion

from praisonaiagents.output import OutputFormatter, OutputStyle

# Markdown to plain text
plain_style = OutputStyle(format="plain")
formatter = OutputFormatter(plain_style)

markdown = "# Title\n\n**Bold** and *italic*"
plain = formatter.format(markdown)
# "Title\n\nBold and italic"

Length Control

style = OutputStyle(max_length=100)
formatter = OutputFormatter(style)

long_text = "This is a sample sentence. " * 20
truncated = formatter.format(long_text)
# Truncated to ~100 characters with "..."

JSON Output

json_style = OutputStyle(format="json")
formatter = OutputFormatter(json_style)

result = formatter.format("Hello, World!")
# {"response": "Hello, World!", "format": "text"}

Utility Functions

formatter = OutputFormatter()

text = "This is a test sentence with several words."

# Count words
words = formatter.get_word_count(text)  # 8

# Count characters
chars = formatter.get_char_count(text)  # 43

# Estimate tokens
tokens = formatter.estimate_tokens(text)  # ~10

System Prompt Integration

Styles can generate system prompt additions:
style = OutputStyle.concise()
prompt_addition = style.get_system_prompt_addition()
# "Be concise and direct. Avoid unnecessary elaboration..."

# Add to agent instructions
full_instructions = f"{base_instructions}\n\n{prompt_addition}"
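
Continuing the snippet above, the combined string can be passed straight to an agent; a minimal sketch, assuming base_instructions holds your own instruction text (the agent name is illustrative):
from praisonaiagents import Agent

agent = Agent(
    name="ConciseAssistant",          # illustrative name
    instructions=full_instructions
)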

Serialization

# Save style
style = OutputStyle(name="custom", format="markdown")
data = style.to_dict()

# Restore style
restored = OutputStyle.from_dict(data)
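
Assuming to_dict() returns JSON-serializable data, a style can be persisted with the standard library; a minimal sketch (the file name is illustrative):
import json

# Save to disk
with open("style.json", "w") as f:
    json.dump(data, f)

# Load and rebuild the style
with open("style.json") as f:
    restored = OutputStyle.from_dict(json.load(f))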

Zero Performance Impact

Output styles use lazy loading:
# Only loads when accessed
from praisonaiagents.output import OutputStyle