## At a Glance: Iterative vs. Interactive
While the names sound similar, they describe opposite answers to one question: who drives the loop?

| Term | Driver | Paradigm | Best For |
|---|---|---|---|
| Interactive | 👤 Human | Chat / TUI | Exploratory work, manual guidance |
| Iterative | 🧠 Agent | Autonomy (mode="iterative") | Self-correction, multi-step automation |
| Caller | — | Single-turn response | Quick actions, direct pipelines |
## Execution Mode
AutonomyConfig has a mode field that controls execution behavior:
| Level | Default Mode | Behavior |
|---|---|---|
| "suggest" (default) | "caller" | Single chat() → AutonomyResult |
| "auto_edit" | "caller" | Single chat() + auto-approve edits |
| "full_auto" | "iterative" | run_autonomous() loop with completion detection |
autonomy=True now defaults to caller mode — one chat() call wrapped in AutonomyResult, with no wasteful iteration loop. Use mode="iterative" or level="full_auto" for multi-turn autonomous loops.

## Execution Flow Comparison
### Caller Mode (default for autonomy=True)

### Iterative Mode (default for full_auto)
When mode="iterative", the agent runs a multi-turn loop with safety infrastructure.
### Interactive Mode Flow
In interactive mode, the human drives the loop — each message is a single-turn chat() call.
The Loop Driver Distinction:
- Interactive Mode: The human provides the prompt, the agent responds once, and the human decides the next prompt.
- Iterative Mode: The agent provides its own prompts to itself in a loop until it detects the task is complete.
- Caller Mode: A simplified “one-shot” interaction that returns rich metadata.
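The three loop drivers above can be sketched with a stub agent. StubAgent and its "DONE" marker are illustrative stand-ins for a real chat-capable agent and the SDK's completion detection:

```python
# Stub sketch of the three loop drivers. StubAgent and its "DONE" marker are
# illustrative stand-ins, not SDK classes.
class StubAgent:
    def chat(self, prompt: str) -> str:
        # A real agent would call an LLM; this stub finishes only on "fix" tasks.
        return "DONE" if "fix" in prompt else f"working on: {prompt}"

def caller_mode(agent: StubAgent, prompt: str) -> str:
    # Caller mode: one chat() call, result handed straight back to the caller.
    return agent.chat(prompt)

def interactive_mode(agent: StubAgent, prompts: list[str]) -> list[str]:
    # Interactive mode: the human supplies each prompt, one turn per message.
    return [agent.chat(p) for p in prompts]

def iterative_mode(agent: StubAgent, task: str, max_iterations: int = 20) -> int:
    # Iterative mode: the agent feeds itself follow-up prompts until done.
    prompt, turns = task, 0
    while turns < max_iterations:
        reply = agent.chat(prompt)
        turns += 1
        if reply == "DONE":            # completion detection stand-in
            break
        prompt = f"continue: {reply}"  # the agent drives its own next prompt
    return turns
```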
## Feature Comparison
| Dimension | Caller Mode (autonomy=True) | Iterative Mode (full_auto) | Interactive (no autonomy) |
|---|---|---|---|
| Primary Driver | Application Code | 🧠 Agent (Internal Loop) | 👤 Human (External Loop) |
| Turns | 1 (single chat()) | Up to max_iterations | Single turn per interaction |
| Complexity Mgt | Simple pipelines | Automated self-correction | Manual refinement |
| Return type | AutonomyResult | AutonomyResult | str |
| Self-correction | ❌ | ✅ Doom loop + recovery | ❌ (Human corrects) |
| Session persistence | Manual | Auto-saves each iteration | Manual |
| Observability | Standard | Built-in event emission | Standard |
## Speed Profile

### Init Overhead (one-time)
| Component | Cost | Notes |
|---|---|---|
| AutonomyConfig() | ~0ms | Simple dataclass |
| AutonomyTrigger() | ~1-2ms | Lazy imports EscalationTrigger |
| DoomLoopTracker() | ~1ms | Lazy imports DoomLoopDetector |
| FileSnapshot | ~50-200ms | Only if track_changes=True |
| ObservabilityHooks | ~1ms | Only if observe=True |
With autonomy=True (defaults), init overhead is < 5ms, since track_changes=False and observe=False by default.

### Per-Iteration Overhead
| Operation | Cost |
|---|---|
| Timeout check | ~0μs |
| Doom-loop check | ~0.1ms |
| Action recording | ~0.1ms |
| Completion detection | ~0.1ms |
| Session auto-save | 0-5ms |
| Total | ~0.5ms |
### End-to-End Comparison
| Metric | Non-Autonomy | Autonomy |
|---|---|---|
| Minimum LLM calls | 1 | 1 (if completion detected on first turn) |
| Maximum LLM calls | 1 | 20 (default max_iterations) |
| Typical for multi-step task | 1 (all tools in one turn) | 1-3 (re-injects if not “done”) |
| Overhead per call | 0 | ~0.5ms |
## Important Behavioral Differences
Approval with autonomy=True: the default level is "suggest", which does NOT auto-approve tools. Only level="full_auto" auto-wires AutoApproveBackend.

## Two-Layer Tool Architecture
Tools are provisioned at two independent layers. The CLI wrapper assembles tool lists and passes them as tools=[...] to the SDK’s Agent() constructor.
### What each layer provides

#### CLI Wrapper (praisonai)
13 tools via get_interactive_tools():

- ACP (4): create/edit/delete files, execute commands (disabled by default in autonomy)
- LSP (4): symbols, definitions, references, diagnostics
- Basic (5): read/write files, list, execute, search
Entry points: praisonai tui, praisonai "prompt"

#### Core SDK (praisonaiagents)
16 tools when autonomy=True via AUTONOMY_PROFILE:

- file_ops (7): read/write/list/copy/move/delete/info
- shell (3): execute_command, list_processes, get_system_info
- web (3): internet_search, search_web, web_crawl
- code_intelligence (3): ast_grep search/rewrite/scan
Entry point: Agent(autonomy=True) in Python

### Built-in ToolProfiles
The SDK ships with composable profiles in tools/profiles.py:
| Profile | Tools | Description |
|---|---|---|
| code_intelligence | 3 | ast-grep search, rewrite, scan |
| file_ops | 7 | read, write, list, copy, move, delete, get_file_info |
| shell | 3 | execute_command, list_processes, get_system_info |
| web | 3 | internet_search, search_web, web_crawl |
| code_exec | 4 | execute_code, analyze_code, format_code, lint_code |
| schedule | 3 | schedule_add, schedule_list, schedule_remove |
| autonomy | 16 | Composite: file_ops + shell + web + code_intelligence |
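Assuming plain lists of tool names, the composite autonomy profile in the table above could be derived like this (a sketch only; the real tools/profiles.py registers actual callables, not strings):

```python
# Sketch of deriving the composite "autonomy" profile from the individual
# profiles in the table above. Plain name strings stand in for the real
# tool callables registered by tools/profiles.py.
PROFILES = {
    "file_ops": ["read", "write", "list", "copy", "move", "delete", "get_file_info"],
    "shell": ["execute_command", "list_processes", "get_system_info"],
    "web": ["internet_search", "search_web", "web_crawl"],
    "code_intelligence": ["ast_grep_search", "ast_grep_rewrite", "ast_grep_scan"],
}

# Composite profile: file_ops + shell + web + code_intelligence = 16 tools.
PROFILES["autonomy"] = [
    tool
    for name in ("file_ops", "shell", "web", "code_intelligence")
    for tool in PROFILES[name]
]
```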
### Tools by entry point
| Entry Point | Tools Available | Source |
|---|---|---|
| praisonai tui | 13 (ACP + LSP + Basic) | CLI wrapper |
| praisonai "prompt" | 13 + autonomy (16) | CLI wrapper + SDK |
| praisonai tracker run | 30 (expanded set) | tracker.py |
| Agent(autonomy=True) in Python | 16 (AUTONOMY_PROFILE) | Core SDK |
| Agent() in Python | 0 (user-provided only) | — |
## Design Principles

The current architecture follows a layered separation pattern.

### Core SDK stays minimal
The praisonaiagents package provides the agent runtime, tool execution, and LLM integration — but does not bundle default tools. This keeps the SDK lightweight and avoids opinionated defaults.

### CLI wrapper adds batteries
The praisonai wrapper package adds ACP, LSP, file operations, and search tools for CLI users. It assembles toolsets and passes them as tools=[...] to the Agent constructor.

### Why this is the best approach
#### SDK independence

The Core SDK has zero dependency on the CLI wrapper. Users embedding praisonaiagents in their own applications bring exactly the tools they need — no surprise defaults, no bloat.

#### Composable toolsets
Each entry point assembles its own toolset. praisonai tui loads ACP + LSP + Basic. praisonai tracker run loads a broader set. SDK users pass their own tools. This is intentional — different contexts need different capabilities.

#### Single source of truth
interactive_tools.py with get_interactive_tools() is the canonical provider. Both tui/app.py and main.py call this single function. Adding a new interactive tool means editing one file.

## When to Use Each Mode
| Mode | Best For | Example Prompt | What Happens |
|---|---|---|---|
| No autonomy | Questions, explanations, advice | "Explain authentication patterns" | Agent answers from knowledge — doesn’t touch files or run tools |
| Caller (default) | Single-shot automation, pipelines | "Search for X and save results to file" | Agent uses tools in one turn, returns AutonomyResult with metadata |
| Iterative | Multi-step self-correction, build→test→fix | "Refactor auth module and verify tests pass" | Agent loops until task is complete, with doom loop protection |
### No Autonomy
Fastest — agent explains but doesn’t act.
### Caller Mode
One turn — agent acts immediately with tools.
### Iterative Mode
Multi-turn — agent self-corrects until done.
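The decision rule behind these three modes can be distilled into a small helper (a sketch, not an SDK function):

```python
# Rule-of-thumb mode selection distilled from the "When to Use Each Mode"
# table above. A sketch for illustration, not an SDK function.
def choose_mode(uses_tools: bool, multi_step: bool) -> str:
    if not uses_tools:
        return "no autonomy"  # pure Q&A: answer from knowledge only
    if multi_step:
        return "iterative"    # loop with self-correction until done
    return "caller"           # one tool-using turn, AutonomyResult back
```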
## Extending Tools

### Using ToolProfiles (recommended)
Combine built-in profiles, or register custom ones.

### Adding tools for CLI users
Add tools to interactive_tools.py.
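A sketch of what that edit might look like. get_interactive_tools() is assumed, per the docs above, to return a list of callables; the real interactive_tools.py also assembles the ACP/LSP/Basic tools:

```python
# Sketch of extending the CLI toolset via get_interactive_tools(). The real
# interactive_tools.py also assembles the ACP/LSP/Basic tools here.
def word_count(path: str) -> int:
    """Custom tool: count whitespace-separated words in a file."""
    with open(path, encoding="utf-8") as f:
        return len(f.read().split())

def get_interactive_tools() -> list:
    tools = []                # real function: ACP + LSP + Basic tools here
    tools.append(word_count)  # one edit in this one file exposes the tool
    return tools
```

Because both tui/app.py and main.py call this single function, the new tool is available everywhere at once.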
### Adding tools for SDK users
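For SDK users, a tool is just a typed Python callable. The Agent call below is shown as a comment because it requires the praisonaiagents package to be installed:

```python
# A tool is just a typed Python callable; the SDK attaches no default tools.
def get_stock_price(symbol: str) -> str:
    """Toy tool: return a canned quote for a ticker symbol."""
    prices = {"ACME": "42.00"}
    return prices.get(symbol.upper(), "unknown")

# Requires the praisonaiagents package:
# from praisonaiagents import Agent
# agent = Agent(instructions="You are a finance assistant",
#               tools=[get_stock_price])  # tools=[...] per the docs above
```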
Pass tools directly — the SDK is tool-agnostic.

## Related
- Autonomy Concepts: configuration, stages, and doom loop detection
- Autonomous Loops: execution loop details and async support
- Interactive Tools: ACP, LSP, and basic tool reference
- Autonomy Modes: suggest, auto-edit, and full-auto modes

