Learn how to create AI agents that can execute tasks in parallel for improved performance.
This workflow distributes tasks across multiple LLM calls that run simultaneously, then aggregates the results, so complex or large-scale operations can be handled efficiently.
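To make the pattern concrete before the PraisonAI example, here is a minimal sketch of the same fan-out/aggregate idea using only asyncio and the openai async client. The prompts and model name are illustrative, not part of the quickstart:

import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment

async def ask(prompt: str) -> str:
    # One LLM call per sub-task
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

async def main():
    # Fan out: run the independent research prompts concurrently
    findings = await asyncio.gather(
        ask("Summarise current AI market trends."),
        ask("Summarise key competitors in the AI industry."),
        ask("Summarise AI customer needs and behaviours."),
    )
    # Aggregate: a final call synthesises the parallel results
    summary = await ask("Combine these findings into one summary:\n\n" + "\n\n".join(findings))
    print(summary)

asyncio.run(main())

PraisonAI wraps this fan-out and aggregation step up for you, as shown in the workflow below.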
Set your OpenAI API key as an environment variable in your terminal:
export OPENAI_API_KEY=your_api_key_here
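If you want to confirm the key is visible to Python before running the workflow, a quick check (the snippet below is just a convenience, not a required step) is:

import os

# Prints True if the key is set in the current shell session
print("OPENAI_API_KEY" in os.environ)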
3. Create a file
Create a new file app.py with the basic setup:
from praisonaiagents import Agent, AgentFlow
from praisonaiagents import parallel

# Create parallel research agents
market_researcher = Agent(
    name="MarketResearcher",
    role="Market Research Analyst",
    goal="Research market trends and opportunities",
    instructions="Analyze market trends. Provide concise market insights."
)

competitor_researcher = Agent(
    name="CompetitorResearcher",
    role="Competitive Intelligence Analyst",
    goal="Research competitor strategies",
    instructions="Analyze competitors. Provide key competitive insights."
)

customer_researcher = Agent(
    name="CustomerResearcher",
    role="Customer Research Analyst",
    goal="Research customer needs and behaviors",
    instructions="Analyze customer segments. Provide customer insights."
)

# Create aggregator agent
aggregator = Agent(
    name="Aggregator",
    role="Research Synthesizer",
    goal="Synthesize research findings",
    instructions="Combine all research findings into a comprehensive summary."
)

# Create workflow with parallel execution
workflow = AgentFlow(
    steps=[
        parallel([market_researcher, competitor_researcher, customer_researcher]),
        aggregator
    ]
)

# Run workflow - all researchers work in parallel, then aggregator summarizes
result = workflow.start("Research the AI industry")
print(result["output"])
4. Start Workflow
Type this in your terminal to run your workflow:
python app.py
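If you want to try different research topics without editing the script each time, one optional tweak (a sketch, not part of the quickstart) is to read the topic from the command line at the bottom of app.py:

import sys

# Fall back to the original topic when no argument is given
topic = sys.argv[1] if len(sys.argv) > 1 else "Research the AI industry"

# Replace the workflow.start("Research the AI industry") call with:
result = workflow.start(topic)
print(result["output"])

You can then run, for example, python app.py "Research the renewable energy industry".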
Requirements
Python 3.10 or higher
An OpenAI API key (you can generate one here). To use other models, follow this guide; a short sketch appears after this list.
Basic understanding of Python and async programming
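Assuming the version of praisonaiagents you have installed accepts an llm parameter on Agent (check the models guide for the exact options and supported model strings), pointing one of the research agents at a different model might look like this sketch:

from praisonaiagents import Agent

# The llm parameter and model string below are assumptions for illustration;
# consult the "Other models" guide for the configuration your provider needs.
market_researcher = Agent(
    name="MarketResearcher",
    role="Market Research Analyst",
    goal="Research market trends and opportunities",
    instructions="Analyze market trends. Provide concise market insights.",
    llm="gpt-4o-mini"
)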