PraisonAI provides two primary execution approaches: Autonomy (agent-driven results) and Interactive (human-driven sessions). Understanding the distinction between Iterative (autonomous loop) and Interactive (manual loop) is key to building complex workflows.

At a Glance: Iterative vs. Interactive

While the names sound similar, Iterative and Interactive sit on opposite sides of “who drives the loop”, with Caller as the single-turn baseline:
| Term | Driver | Paradigm | Best For |
|------|--------|----------|----------|
| Interactive | 👤 Human | Chat / TUI | Exploratory work, manual guidance |
| Iterative | 🧠 Agent | Autonomy (mode="iterative") | Self-correction, multi-step automation |
| Caller | Application code | Single-turn response | Quick actions, direct pipelines |

Execution Mode

AutonomyConfig has a mode field that controls execution behavior:
| Level | Default Mode | Behavior |
|-------|--------------|----------|
| "suggest" (default) | "caller" | Single chat() → AutonomyResult |
| "auto_edit" | "caller" | Single chat() + auto-approve edits |
| "full_auto" | "iterative" | run_autonomous() loop with completion detection |
autonomy=True now defaults to caller mode — one chat() call wrapped in AutonomyResult. No wasteful iteration loop. Use mode="iterative" or level="full_auto" for multi-turn autonomous loops.
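The level-to-mode defaults above can be sketched as a small resolver. This is a hypothetical illustration of the documented mapping, not the SDK's actual AutonomyConfig implementation:

```python
# Hypothetical sketch of the documented level -> default-mode mapping;
# the real resolution lives inside the praisonaiagents SDK.
DEFAULT_MODES = {
    "suggest": "caller",       # single chat() wrapped in AutonomyResult
    "auto_edit": "caller",     # single chat() + auto-approved edits
    "full_auto": "iterative",  # run_autonomous() multi-turn loop
}

def resolve_mode(level="suggest", mode=None):
    """Return an explicitly set mode, else the level's default."""
    if mode is not None:
        return mode
    return DEFAULT_MODES[level]
```

An explicit mode always wins over the level's default, which is why `mode="iterative"` and `level="full_auto"` both reach the autonomous loop.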

Execution Flow Comparison

Caller Mode (default for autonomy=True)

Iterative Mode (default for full_auto)

When mode="iterative", the agent runs a multi-turn loop with safety infrastructure.
Key insight: All three modes use the same inner tool loop. The difference is what wraps it — nothing (interactive), an AutonomyResult wrapper (caller), or a multi-turn retry loop (iterative).

Interactive Mode Flow

In interactive mode, the human drives the loop — each message is a single-turn chat() call.
The Loop Driver Distinction:
  • Interactive Mode: The human provides the prompt, the agent responds once, and the human decides the next prompt.
  • Iterative Mode: The agent provides its own prompts to itself in a loop until it detects the task is complete.
  • Caller Mode: A simplified “one-shot” interaction that returns rich metadata.
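The three drivers can be made concrete with a toy sketch, where a stub chat() stands in for one LLM call with tools. This is illustrative pseudocode, not the SDK's actual loop:

```python
# Illustrative sketch: the same single-turn chat() wrapped three ways.
def chat(prompt: str) -> str:
    # Stub for one LLM call with tool use.
    return f"response to: {prompt}"

def interactive(prompts):
    """Human drives: one chat() per human message."""
    return [chat(p) for p in prompts]

def caller(prompt):
    """Application drives: one chat(), wrapped with metadata."""
    return {"output": chat(prompt), "success": True, "iterations": 1}

def iterative(prompt, is_complete, max_iterations=20):
    """Agent drives: re-inject prompts until completion is detected."""
    output, turns = "", 0
    for turns in range(1, max_iterations + 1):
        output = chat(prompt)
        if is_complete(output):
            break
        prompt = "continue: " + prompt  # agent feeds itself the next turn
    return {"output": output, "iterations": turns}
```

The inner chat() is identical in all three; only the wrapper changes, which mirrors the "same inner tool loop" insight above.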

Feature Comparison

| Dimension | Caller Mode (autonomy=True) | Iterative Mode (full_auto) | Interactive (no autonomy) |
|-----------|-----------------------------|----------------------------|---------------------------|
| Primary Driver | Application Code | 🧠 Agent (Internal Loop) | 👤 Human (External Loop) |
| Turns | 1 (single chat()) | Up to max_iterations | Single turn per interaction |
| Complexity Mgmt | Simple pipelines | Automated self-correction | Manual refinement |
| Return type | AutonomyResult | AutonomyResult | str |
| Self-correction | ❌ | ✅ Doom loop + recovery | ❌ (Human corrects) |
| Session persistence | Manual | Auto-saves each iteration | Manual |
| Observability | Standard | Built-in event emission | Standard |
Key insight: Caller mode gives you AutonomyResult metadata (.success, .iterations, .duration_seconds) without the overhead of the iterative loop — ideal for most use cases.

Speed Profile

Init Overhead (one-time)

| Component | Cost | Notes |
|-----------|------|-------|
| AutonomyConfig() | ~0ms | Simple dataclass |
| AutonomyTrigger() | ~1-2ms | Lazy imports EscalationTrigger |
| DoomLoopTracker() | ~1ms | Lazy imports DoomLoopDetector |
| FileSnapshot | ~50-200ms | Only if track_changes=True |
| ObservabilityHooks | ~1ms | Only if observe=True |
With autonomy=True (defaults), init overhead is < 5ms since track_changes=False and observe=False by default.

Per-Iteration Overhead

| Operation | Cost |
|-----------|------|
| Timeout check | ~0μs |
| Doom-loop check | ~0.1ms |
| Action recording | ~0.1ms |
| Completion detection | ~0.1ms |
| Session auto-save | 0-5ms |
| Total | ~0.5ms |
Per-iteration overhead is ~0.5ms — negligible compared to LLM API calls (typically 1-30 seconds each). The real speed difference is the number of LLM calls, not framework overhead.
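A quick back-of-envelope calculation, using the figures above, shows why the framework overhead is negligible even at the fast end of LLM latency:

```python
# Back-of-envelope: framework overhead vs. LLM latency per iteration.
overhead_s = 0.0005  # ~0.5 ms framework overhead per iteration
llm_call_s = 1.0     # fast end of the 1-30 s LLM latency range

fraction = overhead_s / (overhead_s + llm_call_s)
print(f"{fraction:.4%}")  # well under 0.1% even for fast LLM calls
```

At 30 s per LLM call, the fraction shrinks by another factor of 30.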

End-to-End Comparison

| Metric | Non-Autonomy | Autonomy |
|--------|--------------|----------|
| Minimum LLM calls | 1 | 1 (if completion detected on first turn) |
| Maximum LLM calls | 1 | 20 (default max_iterations) |
| Typical for multi-step task | 1 (all tools in one turn) | 1-3 (re-injects if not “done”) |
| Overhead per call | 0 | ~0.5ms |

Important Behavioral Differences

Return type: AutonomyResult.__str__() returns .output, so print(result) works the same. But code that does if result: or len(result) may break — AutonomyResult is always truthy and has no __len__. Use str(result) or result.output.
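The difference can be demonstrated with a minimal stand-in class that mirrors the behavior described above (a sketch, not the SDK's AutonomyResult):

```python
class FakeAutonomyResult:
    """Minimal stand-in mirroring the documented behavior:
    __str__ returns .output, always truthy, no __len__."""

    def __init__(self, output: str):
        self.output = output

    def __str__(self) -> str:
        return self.output

result = FakeAutonomyResult("")  # even an empty output...
print(bool(result))              # ...is truthy: True
print(str(result) == result.output)
try:
    len(result)                  # no __len__ -> TypeError
except TypeError:
    print("len() is not supported")
```

Code that previously branched on `if result:` or `len(result)` should switch to `result.output` (or `str(result)`) before migrating to autonomy mode.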
Early completion risk: If the LLM says "I'm done searching" mid-task, keyword-based detection might stop early. For multi-step tasks, the tool completion signal is the primary mechanism — the model naturally stops calling tools when done. For explicit control, use structured completion signals:
```python
Agent(autonomy={"completion_promise": "COMPLETED"})
```
Approval with autonomy=True: Default level is "suggest", which does NOT auto-approve tools. Only level="full_auto" auto-wires AutoApproveBackend.

Two-Layer Tool Architecture

Tools are provisioned at two independent layers. The CLI wrapper assembles tool lists and passes them as tools=[...] to the SDK’s Agent() constructor.

What each layer provides

ACP Tools Performance: ACP tools (acp_edit_file, acp_execute_command) go through a complex orchestration flow and can be slow (174s+ per operation). In autonomy mode (--autonomy full_auto), ACP tools are disabled by default for speed. Use --acp flag to explicitly enable them when needed.

CLI Wrapper (praisonai)

13 tools via get_interactive_tools():
  • ACP (4): create/edit/delete files, execute commands (disabled by default in autonomy)
  • LSP (4): symbols, definitions, references, diagnostics
  • Basic (5): read/write files, list, execute, search
Used by: praisonai tui, praisonai "prompt"

Core SDK (praisonaiagents)

16 tools when autonomy=True via AUTONOMY_PROFILE:
  • file_ops (7): read/write/list/copy/move/delete/info
  • shell (3): execute_command, list_processes, get_system_info
  • web (3): internet_search, search_web, web_crawl
  • code_intelligence (3): ast_grep search/rewrite/scan
Used by: Agent(autonomy=True) in Python

Built-in ToolProfiles

The SDK ships with composable profiles in tools/profiles.py:
| Profile | Tools | Description |
|---------|-------|-------------|
| code_intelligence | 3 | ast-grep search, rewrite, scan |
| file_ops | 7 | read, write, list, copy, move, delete, get_file_info |
| shell | 3 | execute_command, list_processes, get_system_info |
| web | 3 | internet_search, search_web, web_crawl |
| code_exec | 4 | execute_code, analyze_code, format_code, lint_code |
| schedule | 3 | schedule_add, schedule_list, schedule_remove |
| autonomy | 16 | Composite: file_ops + shell + web + code_intelligence |
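As a sanity check, the composite autonomy profile's tool count follows directly from the sizes of its four constituent profiles:

```python
# Sanity check: the autonomy composite (16 tools) is the sum of
# its four constituent profiles, per the profile table.
profile_sizes = {"file_ops": 7, "shell": 3, "web": 3, "code_intelligence": 3}
autonomy_size = sum(profile_sizes.values())
print(autonomy_size)  # → 16
```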

Tools by entry point

| Entry Point | Tools Available | Source |
|-------------|-----------------|--------|
| praisonai tui | 13 (ACP + LSP + Basic) | CLI wrapper |
| praisonai "prompt" | 13 + autonomy (16) | CLI wrapper + SDK |
| praisonai tracker run | 30 (expanded set) | tracker.py |
| Agent(autonomy=True) in Python | 16 (AUTONOMY_PROFILE) | Core SDK |
| Agent() in Python | 0 (user-provided only) | Core SDK |

Design Principles

The current architecture follows a layered separation pattern:
1. Core SDK stays minimal

The praisonaiagents package provides the agent runtime, tool execution, and LLM integration — but does not bundle default tools. This keeps the SDK lightweight and avoids opinionated defaults.
2. CLI wrapper adds batteries

The praisonai wrapper package adds ACP, LSP, file operations, and search tools for CLI users. It assembles toolsets and passes them as tools=[...] to the Agent constructor.
3. Tools are data, not hardcoded

Interactive tools are defined as groups in interactive_tools.py with a TOOL_GROUPS dictionary. New tools are added to a group, and all consumers automatically get them.

Why this is the best approach

The Core SDK has zero dependency on the CLI wrapper. Users embedding praisonaiagents in their own applications bring exactly the tools they need — no surprise defaults, no bloat.
Each entry point assembles its own toolset. praisonai tui loads ACP + LSP + Basic. praisonai tracker run loads a broader set. SDK users pass their own tools. This is intentional — different contexts need different capabilities.
interactive_tools.py with get_interactive_tools() is the canonical provider. Both tui/app.py and main.py call this single function. Adding a new interactive tool means editing one file.
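A canonical provider of this shape can be sketched as a flattening over TOOL_GROUPS. The tool functions below are placeholders, and the actual get_interactive_tools() in interactive_tools.py may differ in signature and details:

```python
# Illustrative sketch of a TOOL_GROUPS-based provider; the tool
# functions here are placeholders, not the wrapper's real callables.
def read_file(path): ...
def write_file(path, data): ...
def lsp_list_symbols(path): ...

TOOL_GROUPS = {
    "basic": [read_file, write_file],
    "lsp": [lsp_list_symbols],
}

def get_interactive_tools(groups=None):
    """Flatten the selected groups (default: all) into one tool list."""
    selected = groups or list(TOOL_GROUPS)
    return [tool for g in selected for tool in TOOL_GROUPS[g]]
```

Because every consumer calls the same function, adding a tool to a group propagates to the TUI and the CLI prompt mode at once.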

When to Use Each Mode

| Mode | Best For | Example Prompt | What Happens |
|------|----------|----------------|--------------|
| No autonomy | Questions, explanations, advice | "Explain authentication patterns" | Agent answers from knowledge — doesn’t touch files or run tools |
| Caller (default) | Single-shot automation, pipelines | "Search for X and save results to file" | Agent uses tools in one turn, returns AutonomyResult with metadata |
| Iterative | Multi-step self-correction, build→test→fix | "Refactor auth module and verify tests pass" | Agent loops until task is complete, with doom loop protection |

No Autonomy

Fastest — agent explains but doesn’t act.
```python
from praisonaiagents import Agent

agent = Agent(
    instructions="Help with coding"
)
agent.start("Explain this function")
```

Caller Mode

One turn — agent acts immediately with tools.
```python
from praisonaiagents import Agent

agent = Agent(
    instructions="You are a research assistant",
    autonomy=True
)
result = agent.start("Search and save to file")
print(result.success)
```

Iterative Mode

Multi-turn — agent self-corrects until done.
```python
from praisonaiagents import Agent

agent = Agent(
    instructions="Build and test features",
    autonomy={"mode": "iterative"}
)
result = agent.start("Refactor the auth module")
print(result.iterations)
```

Extending Tools

Combine built-in profiles or register custom ones:
```python
from praisonaiagents import Agent
from praisonaiagents.tools.profiles import (
    resolve_profiles, register_profile, ToolProfile
)

# Combine built-in profiles
tools = resolve_profiles("file_ops", "web", "shell")
agent = Agent(tools=tools)

# Register a custom profile (e.g., from CLI wrapper)
register_profile(ToolProfile(
    name="acp",
    tools=["acp_create_file", "acp_edit_file",
           "acp_delete_file", "acp_execute_command"],
    description="Agentic Change Plan tools",
))

# Now use it alongside built-in profiles
tools = resolve_profiles("autonomy", "acp", "lsp")
```

Adding tools for CLI users

Add tools to interactive_tools.py:
```python
# In praisonai/cli/features/interactive_tools.py
TOOL_GROUPS = {
    "basic": [read_file, write_file, ...],
    "acp": [acp_create_file, ...],
    "lsp": [lsp_list_symbols, ...],
    "my_group": [my_custom_tool],  # Add new group
}
```

Adding tools for SDK users

Pass tools directly — the SDK is tool-agnostic:
```python
from praisonaiagents import Agent

def my_tool(query: str) -> str:
    """My custom tool."""
    return "result"

agent = Agent(
    instructions="Use tools wisely",
    tools=[my_tool],
    autonomy=True  # Gets 16 AUTONOMY_PROFILE tools + your tools
)
```

Related Pages

  • Autonomy Concepts: Configuration, stages, and doom loop detection
  • Autonomous Loops: Execution loop details and async support
  • Interactive Tools: ACP, LSP, and basic tool reference
  • Autonomy Modes: Suggest, auto-edit, and full-auto modes