Agent LLM Providers

PraisonAI TypeScript supports 60+ AI providers through AI SDK v6. Switch providers by changing the `llm` parameter; all providers share the same unified API.

Supported Providers (60+)

Core Providers

| Provider | Model Examples | Modalities | Env Variable |
|---|---|---|---|
| OpenAI | `gpt-4o`, `gpt-4o-mini`, `gpt-5` | Chat, Embeddings, Image, Audio | `OPENAI_API_KEY` |
| Anthropic | `claude-sonnet-4`, `claude-3-5-sonnet` | Chat, Image | `ANTHROPIC_API_KEY` |
| Google | `gemini-2.0-flash`, `gemini-1.5-pro` | Chat, Embeddings, Image, Audio | `GOOGLE_API_KEY` |
| Google Vertex | `gemini-pro`, `palm-2` | Chat, Embeddings, Image | `GOOGLE_APPLICATION_CREDENTIALS` |
| Azure OpenAI | `gpt-4`, `gpt-35-turbo` | Chat, Embeddings, Image | `AZURE_API_KEY` |
| Amazon Bedrock | `claude-3`, `titan` | Chat, Embeddings | `AWS_ACCESS_KEY_ID` |

Inference Providers

| Provider | Model Examples | Modalities | Env Variable |
|---|---|---|---|
| xAI | `grok-4`, `grok-3-fast` | Chat, Image | `XAI_API_KEY` |
| Groq | `llama-3.3-70b`, `mixtral-8x7b` | Chat | `GROQ_API_KEY` |
| Fireworks | `llama-v3`, `mixtral` | Chat, Embeddings | `FIREWORKS_API_KEY` |
| Together.ai | `llama-3`, `mistral-7b` | Chat, Embeddings | `TOGETHER_API_KEY` |
| DeepInfra | `llama-3`, `mistral` | Chat, Embeddings | `DEEPINFRA_API_KEY` |
| Replicate | `llama`, `stable-diffusion` | Chat, Image | `REPLICATE_API_TOKEN` |
| Baseten | custom models | Chat | `BASETEN_API_KEY` |
| Hugging Face | various | Chat, Embeddings | `HUGGINGFACE_API_KEY` |

Model Providers

| Provider | Model Examples | Modalities | Env Variable |
|---|---|---|---|
| Mistral | `mistral-large`, `mistral-medium` | Chat, Embeddings | `MISTRAL_API_KEY` |
| Cohere | `command-r`, `command-r-plus` | Chat, Embeddings | `COHERE_API_KEY` |
| DeepSeek | `deepseek-chat`, `deepseek-reasoner` | Chat | `DEEPSEEK_API_KEY` |
| Cerebras | `llama3.1-8b`, `llama3.3-70b` | Chat | `CEREBRAS_API_KEY` |
| Perplexity | `pplx-7b`, `pplx-70b` | Chat | `PERPLEXITY_API_KEY` |

Image Generation

| Provider | Model Examples | Modalities | Env Variable |
|---|---|---|---|
| Fal | `flux`, `stable-diffusion` | Image | `FAL_KEY` |
| Black Forest Labs | `FLUX.1` | Image | `BFL_API_KEY` |
| Luma | `dream-machine` | Image, Video | `LUMA_API_KEY` |

Audio/Speech Providers

| Provider | Model Examples | Modalities | Env Variable |
|---|---|---|---|
| ElevenLabs | `eleven_multilingual_v2` | Speech | `ELEVENLABS_API_KEY` |
| AssemblyAI | transcription | Audio | `ASSEMBLYAI_API_KEY` |
| Deepgram | `nova-2` | Audio, Speech | `DEEPGRAM_API_KEY` |
| Gladia | transcription | Audio | `GLADIA_API_KEY` |
| LMNT | speech | Speech | `LMNT_API_KEY` |
| Hume | emotion | Audio | `HUME_API_KEY` |
| Rev.ai | transcription | Audio | `REVAI_API_KEY` |

Gateway/Proxy Providers

| Provider | Description | Env Variable |
|---|---|---|
| AI Gateway | Unified gateway | `AI_GATEWAY_API_KEY` |
| OpenRouter | Multi-provider routing | `OPENROUTER_API_KEY` |
| Portkey | AI gateway | `PORTKEY_API_KEY` |
| Helicone | Observability proxy | `HELICONE_API_KEY` |
| Cloudflare Workers AI | Edge inference | `CLOUDFLARE_API_TOKEN` |

Local/Self-hosted

| Provider | Description | Env Variable |
|---|---|---|
| Ollama | Local models | `OLLAMA_BASE_URL` |
| LM Studio | Local inference | `LM_STUDIO_BASE_URL` |
| NVIDIA NIM | Enterprise local | `NVIDIA_API_KEY` |
| OpenAI Compatible | Any OpenAI-compatible API | `OPENAI_COMPATIBLE_API_KEY` |

Regional/Specialized

| Provider | Description | Env Variable |
|---|---|---|
| Qwen (Alibaba) | Chinese LLM | `DASHSCOPE_API_KEY` |
| Zhipu AI | GLM models | `ZHIPU_API_KEY` |
| MiniMax | Chinese provider | `MINIMAX_API_KEY` |
| Spark (iFlytek) | Chinese provider | `SPARK_API_KEY` |
| SambaNova | Enterprise | `SAMBANOVA_API_KEY` |

Embedding Specialists

| Provider | Description | Env Variable |
|---|---|---|
| Voyage AI | High-quality embeddings | `VOYAGE_API_KEY` |
| Jina AI | Embeddings & search | `JINA_API_KEY` |
| Mixedbread | Embeddings | `MIXEDBREAD_API_KEY` |

Memory/Agent Providers

| Provider | Description | Env Variable |
|---|---|---|
| Mem0 | Memory layer | `MEM0_API_KEY` |
| Letta | Agent memory | `LETTA_API_KEY` |

Enterprise/Cloud

| Provider | Description | Env Variable |
|---|---|---|
| Azure AI | Azure services | `AZURE_API_KEY` |
| SAP AI Core | SAP integration | `SAP_AI_CORE_KEY` |
| Heroku | Heroku AI | `HEROKU_API_KEY` |
| Anthropic Vertex | Claude via Vertex | `GOOGLE_APPLICATION_CREDENTIALS` |

Agent with Different Models

```typescript
import { Agent } from 'praisonai';

// OpenAI (default)
const openaiAgent = new Agent({
  instructions: 'You are helpful.',
  llm: 'gpt-4o-mini'
});

// Anthropic Claude
const claudeAgent = new Agent({
  instructions: 'You are helpful.',
  llm: 'anthropic/claude-3-5-sonnet'
});

// Google Gemini
const geminiAgent = new Agent({
  instructions: 'You are helpful.',
  llm: 'google/gemini-2.0-flash'
});

// All work the same way
await openaiAgent.chat('Hello!');
await claudeAgent.chat('Hello!');
await geminiAgent.chat('Hello!');
```

Multi-Agent with Mixed Providers

Use different models for different Agent roles:

```typescript
import { Agent, AgentTeam } from 'praisonai';

// Fast model for quick tasks
const triageAgent = new Agent({
  name: 'Triage',
  instructions: 'Quickly categorize incoming requests.',
  llm: 'gpt-4o-mini'  // Fast and cheap
});

// Powerful model for complex reasoning
const analysisAgent = new Agent({
  name: 'Analyst',
  instructions: 'Perform deep analysis of complex problems.',
  llm: 'anthropic/claude-sonnet-4'  // Best reasoning
});

// Creative model for content
const writerAgent = new Agent({
  name: 'Writer',
  instructions: 'Write engaging content.',
  llm: 'gpt-4o'  // Good for creative tasks
});

const agents = new AgentTeam([triageAgent, analysisAgent, writerAgent]);
await agents.start();
```

Agent Model Selection by Task

```typescript
import { Agent } from 'praisonai';

function createAgentForTask(taskType: string) {
  const modelMap: Record<string, string> = {
    'quick': 'gpt-4o-mini',
    'reasoning': 'anthropic/claude-sonnet-4',
    'creative': 'gpt-4o',
    'code': 'anthropic/claude-3-5-sonnet',
    'multimodal': 'google/gemini-2.0-flash'
  };

  return new Agent({
    instructions: `You handle ${taskType} tasks.`,
    llm: modelMap[taskType] || 'gpt-4o-mini'
  });
}

const codeAgent = createAgentForTask('code');
await codeAgent.chat('Write a function to sort an array');
```

Agent with Streaming

```typescript
import { Agent } from 'praisonai';

const agent = new Agent({
  instructions: 'You tell stories.',
  llm: 'gpt-4o',
  stream: true  // Enable streaming
});

// Response streams to console
await agent.chat('Tell me a short story about a robot');
```

Environment-Based Model Selection

```typescript
import { Agent } from 'praisonai';

// Model from environment variable
const agent = new Agent({
  instructions: 'You are helpful.',
  llm: process.env.PRAISONAI_MODEL || 'gpt-4o-mini'
});

// Or use different models per environment
const model = process.env.NODE_ENV === 'production'
  ? 'gpt-4o'           // Better quality in prod
  : 'gpt-4o-mini';     // Cheaper in dev

const prodAgent = new Agent({
  instructions: 'You are helpful.',
  llm: model
});
```

Model String Formats

| Format | Example |
|---|---|
| Model only | `gpt-4o-mini` |
| Provider/Model | `openai/gpt-4o` |
| Anthropic | `anthropic/claude-3-5-sonnet` |
| Google | `google/gemini-2.0-flash` |
| xAI | `xai/grok-3` |
| Groq | `groq/llama-3.3-70b-versatile` |
| Mistral | `mistral/mistral-large-latest` |
| DeepSeek | `deepseek/deepseek-chat` |
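
The convention above splits on the first slash: everything before it names the provider, everything after it names the model, and a bare model name falls back to a default provider. The following is an illustrative sketch of that parsing rule, not PraisonAI's internal resolver; the `DEFAULT_PROVIDER` constant is an assumption (bare names like `gpt-4o-mini` route to OpenAI):

```typescript
// Sketch of the "provider/model" string convention (not the library's code).
const DEFAULT_PROVIDER = 'openai'; // assumption: bare model names default to OpenAI

function parseModelString(llm: string): { provider: string; model: string } {
  const slash = llm.indexOf('/');
  if (slash === -1) {
    // "gpt-4o-mini" -> provider "openai", model "gpt-4o-mini"
    return { provider: DEFAULT_PROVIDER, model: llm };
  }
  // "anthropic/claude-3-5-sonnet" -> provider "anthropic", model "claude-3-5-sonnet"
  return { provider: llm.slice(0, slash), model: llm.slice(slash + 1) };
}

console.log(parseModelString('gpt-4o-mini'));
console.log(parseModelString('anthropic/claude-3-5-sonnet'));
```

Note that only the first slash separates provider from model, so model IDs that themselves contain slashes (common on routing gateways) survive intact.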

Provider Aliases

Use short aliases for convenience:

| Alias | Provider |
|---|---|
| `oai` | `openai` |
| `claude` | `anthropic` |
| `gemini` | `google` |
| `grok` | `xai` |
| `vertex` | `google-vertex` |
| `aws`, `bedrock` | `amazon-bedrock` |
| `together` | `togetherai` |
| `flux`, `bfl` | `black-forest-labs` |
| `local`, `ollama` | `ollama` |
| `nim`, `nvidia` | `nvidia-nim` |
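
Conceptually, alias resolution is a simple lookup that expands each shorthand to its canonical provider name and passes unknown names through unchanged. This is a sketch built from the table above, not the library's actual resolver:

```typescript
// Illustrative alias map mirroring the table above (a sketch, not PraisonAI internals).
const PROVIDER_ALIASES: Record<string, string> = {
  oai: 'openai',
  claude: 'anthropic',
  gemini: 'google',
  grok: 'xai',
  vertex: 'google-vertex',
  aws: 'amazon-bedrock',
  bedrock: 'amazon-bedrock',
  together: 'togetherai',
  flux: 'black-forest-labs',
  bfl: 'black-forest-labs',
  local: 'ollama',
  nim: 'nvidia-nim',
  nvidia: 'nvidia-nim',
};

// Expand an alias; canonical names pass through untouched.
function resolveProvider(name: string): string {
  return PROVIDER_ALIASES[name] ?? name;
}

console.log(resolveProvider('claude')); // anthropic
console.log(resolveProvider('openai')); // openai (already canonical)
```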

OpenAI-Compatible Providers

Use any OpenAI-compatible API:

```typescript
import { Agent } from 'praisonai';

// Local LM Studio
const lmStudioAgent = new Agent({
  instructions: 'You are helpful.',
  llm: 'openai-compatible/local-model',
  llmConfig: {
    baseUrl: 'http://localhost:1234/v1',
    apiKey: 'not-needed'
  }
});

// Custom OpenAI-compatible endpoint
const customAgent = new Agent({
  instructions: 'You are helpful.',
  llm: 'openai-compatible/my-model',
  llmConfig: {
    baseUrl: process.env.CUSTOM_API_BASE,
    apiKey: process.env.CUSTOM_API_KEY
  }
});
```

Local Providers (Ollama, LM Studio)

```typescript
import { Agent } from 'praisonai';

// Ollama (local)
const ollamaAgent = new Agent({
  instructions: 'You are helpful.',
  llm: 'ollama/llama3.2',
  llmConfig: {
    baseUrl: 'http://localhost:11434'
  }
});

// LM Studio
const lmStudioAgent = new Agent({
  instructions: 'You are helpful.',
  llm: 'lm-studio/local-model',
  llmConfig: {
    baseUrl: 'http://localhost:1234/v1'
  }
});

// NVIDIA NIM
const nimAgent = new Agent({
  instructions: 'You are helpful.',
  llm: 'nvidia-nim/llama-3.1-8b-instruct'
});
```

Agent with Custom Provider Config

For advanced use cases:

```typescript
import { Agent, createProvider } from 'praisonai';

// Create a custom provider instance with retry and timeout options
const customProvider = createProvider('openai/gpt-4o', {
  maxRetries: 3,
  timeout: 60000
});

// For most agents, a plain model string is all you need
const agent = new Agent({
  instructions: 'You are helpful.',
  llm: 'gpt-4o'
});

await agent.chat('Hello!');
```

Custom Provider Extension

Register your own provider:

```typescript
import { Agent, registerProvider, BaseProvider } from 'praisonai';

class MyCustomProvider extends BaseProvider {
  async generateText(options: any) {
    // Your implementation
    return { text: 'response', usage: { totalTokens: 10 } };
  }
}

// Register globally
registerProvider('my-provider', MyCustomProvider);

// Use in Agent
const agent = new Agent({
  instructions: 'You are helpful.',
  llm: 'my-provider/my-model'
});
```

Environment Variables

```bash
# Core providers
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...
export GOOGLE_API_KEY=AIza...

# Inference providers
export XAI_API_KEY=xai-...
export GROQ_API_KEY=gsk_...
export TOGETHER_API_KEY=...
export FIREWORKS_API_KEY=...

# Local providers
export OLLAMA_BASE_URL=http://localhost:11434
export LM_STUDIO_BASE_URL=http://localhost:1234/v1

# Set default model
export PRAISONAI_MODEL=openai/gpt-4o-mini
```