LLM Guardrail CLI Commands

The praisonai-ts CLI provides the guardrail command for validating content against configurable criteria.

Basic Usage

# Check content against guardrails
praisonai-ts guardrail check "Your content here"

# Check with custom criteria
praisonai-ts guardrail check "Content to validate" --criteria "Must be professional"

# Get JSON output
praisonai-ts guardrail check "Hello world" --json

Example Output:
{
  "success": true,
  "data": {
    "status": "passed",
    "score": 0.95,
    "message": "Content passes all criteria",
    "reasoning": "The content is appropriate and safe"
  }
}
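
If you script against the --json flag, you can invoke the CLI from Node and parse its output. The sketch below is illustrative only: it assumes the praisonai-ts binary is on your PATH and that the response matches the shape shown above; the GuardrailResult interface and checkContent helper are hypothetical names, not part of the SDK.

// Minimal sketch: shell out to the CLI and parse the --json response.
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';

const execFileAsync = promisify(execFile);

// Assumed response shape, mirroring the example output above.
interface GuardrailResult {
  status: 'passed' | 'failed' | 'warning';
  score: number;
  message: string;
  reasoning: string;
}

async function checkContent(content: string): Promise<GuardrailResult> {
  const { stdout } = await execFileAsync('praisonai-ts', [
    'guardrail', 'check', content, '--json',
  ]);
  const parsed = JSON.parse(stdout);      // { success, data } per the example
  return parsed.data as GuardrailResult;
}

checkContent('Hello world').then((result) => {
  console.log(result.status, result.score);
});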

Status Values

Status   | Description
passed   | Content meets all criteria
failed   | Content does not meet criteria
warning  | Content partially meets criteria

SDK Usage

For programmatic guardrail usage:

import { LLMGuardrail } from 'praisonai';

const guard = new LLMGuardrail({
  name: 'safety',
  criteria: 'Content must be safe and appropriate',
  threshold: 0.8
});

const result = await guard.check('Hello world');
console.log(result.status); // 'passed', 'failed', or 'warning'
console.log(result.score);  // 0-1
console.log(result.reasoning);
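
In practice you will usually branch on the returned status. The sketch below builds on the guard instance and result shape shown above; the moderateMessage helper is a hypothetical name, and treating a warning as a pass-with-log is an assumption, not prescribed SDK behavior.

// Hypothetical helper: gate a message on the guardrail verdict.
async function moderateMessage(message: string): Promise<string> {
  const result = await guard.check(message);
  switch (result.status) {
    case 'passed':                      // meets all criteria
      return message;
    case 'warning':                     // partially meets criteria
      console.warn(`Guardrail warning: ${result.reasoning}`);
      return message;                   // assumption: allow, but log
    case 'failed':                      // does not meet criteria
    default:
      throw new Error(`Content blocked: ${result.reasoning}`);
  }
}
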
For more details, see the LLM Guardrail SDK documentation.