The `praisonai chat` command starts an interactive chat session with an AI agent.
## Usage
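A sketch of the general invocation pattern, inferred from the argument and option tables below:

```bash
praisonai chat [OPTIONS] [PROMPT]
```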
## Arguments
| Argument | Description |
|---|---|
| PROMPT | Initial prompt for the chat session |
## Options
| Option | Short | Description | Default |
|---|---|---|---|
| --model | -m | LLM model to use | gpt-4o-mini |
| --verbose | -v | Verbose output | false |
| --memory | | Enable memory persistence | false |
| --tools | -t | Tools file path | |
| --user-id | | User ID for memory isolation | |
| --session | -s | Session ID to resume | |
| --workspace | -w | Workspace directory | current dir |
| --debug | | Enable debug logging to ~/.praisonai/async_tui_debug.log | false |
| --safe | | Safe mode: require approval for file writes and commands | false |
| --autonomy/--no-autonomy | | Enable agent autonomy for complex tasks | true |
| --ui-backend | | UI backend: auto, plain, rich, mg | auto |
| --json | | Output JSON (forces plain backend) | false |
| --no-color | | Disable colors | false |
| --theme | | UI theme: default, dark, light, minimal | default |
| --compact | | Compact output mode | false |
## Examples
Start a chat session
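The simplest invocation, using the defaults listed in the options table:

```bash
praisonai chat
```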
Chat with initial prompt
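A sketch of passing the positional PROMPT argument; the quoted text is only a placeholder:

```bash
praisonai chat "Explain the difference between the rich and mg UI backends"
```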
Chat with specific model
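An illustrative use of the --model flag; the model name shown is an example, not a requirement:

```bash
praisonai chat --model gpt-4o
```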
Chat with memory enabled
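Using the documented --memory flag; the user ID in the commented variant is a placeholder:

```bash
praisonai chat --memory
# optionally isolate memory per user (placeholder ID):
# praisonai chat --memory --user-id alice
```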
Resume a previous session
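A sketch of resuming with --session; replace the placeholder with an ID shown by /sessions:

```bash
praisonai chat --session <session-id>
```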
Use plain text output (no colors)
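Forcing the plain backend, which renders without colors (see the UI Backends table below):

```bash
praisonai chat --ui-backend plain
```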
Output as JSON
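Using the documented --json flag, which also forces the plain backend:

```bash
praisonai chat --json
```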
Use middle-ground UI (enhanced streaming)
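Selecting the middle-ground backend described in the next section:

```bash
praisonai chat --ui-backend mg
```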
## UI Backends
The `--ui-backend` flag controls how output is rendered:
| Backend | Description |
|---|---|
| auto | Auto-select best available (default) |
| plain | Plain text, no colors, works everywhere |
| rich | Rich formatting with colors and panels |
| mg | Middle-ground: enhanced streaming with no flicker |
Set `PRAISONAI_UI_SAFE=1` to force the plain backend.
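A minimal sketch of forcing the plain backend through the environment variable noted above:

```bash
export PRAISONAI_UI_SAFE=1
praisonai chat
```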
## Interactive Commands
During a chat session, you can use these commands:

| Command | Description |
|---|---|
| /help | Show available commands |
| /exit, /quit | Exit the chat session |
| /clear | Clear conversation history |
| /new | Start new conversation |
| /session | Show current session info |
| /sessions | List all saved sessions |
| /continue | Continue most recent session |
| /model [name] | Show or change model |
| /cost | Show token usage and cost |
| /history | Show conversation history |
| /export [file] | Export conversation to file |
| /import <file> | Import conversation from file |
| /status | Show ACP/LSP runtime status |
| /auto | Toggle autonomy mode (auto-delegate complex tasks) |
| /debug | Toggle debug logging to ~/.praisonai/async_tui_debug.log |
| /plan <task> | Create a step-by-step plan for a task |
| /handoff <type> <task> | Delegate to specialized agent (code/research/review/docs) |
| /compact | Toggle compact output mode |
| /multiline | Toggle multiline input mode |
| /files | List workspace files for @ mentions |
| /queue | Show pending prompts in queue |
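A hypothetical exchange illustrating a few of these commands; the `>` prompt marker, the model name, the task text, and the file name are all placeholders, and agent responses are omitted:

```text
> /model gpt-4o
> /plan refactor the config loader
> /handoff code implement step 1 of the plan
> /cost
> /export notes.md
```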
## Quick Start
Running `praisonai` with no arguments starts interactive chat mode, the same as running `praisonai chat`.
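A quick sketch of the two equivalent entry points:

```bash
praisonai        # no arguments: starts interactive chat mode
praisonai chat   # explicit equivalent
```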
## Features
The interactive chat mode includes:

- ASCII Art Logo - Beautiful PraisonAI branding on startup
- Status Bar - Shows model, session info, and keyboard shortcuts
- Auto-completion - Tab completion for commands and file paths
- Command History - Navigate previous commands with arrow keys
- Markdown Rendering - Rich formatted responses with syntax highlighting
- Streaming Output - Real-time response streaming
## See Also
- Interactive TUI - Full TUI interface
- Session - Session management
- Memory - Memory management

