🔀 Multi-Agent

Nested agents, multi-model workflows

use_agent · Multi-Model · Tool Isolation

🔀 Agents Within Agents

Spawn isolated sub-agents with different models, system prompts, and tool sets. Perfect for specialized tasks, model comparison, and complex workflows.

🔄 How It Works

┌─────────────────────────────────────────────────────────────────────┐
│                        Parent Agent (DevDuck)                       │
│                        Model: Bedrock Claude                        │
├─────────────────────────────────────────────────────────────────────┤
│                                                                     │
│   User: "Write a poem, then analyze it mathematically"              │
│                                                                     │
│   ┌─────────────────────────────────────────┐                       │
│   │  use_agent()                            │                       │
│   │  model_provider: "anthropic"            │                       │
│   │  system_prompt: "You are a poet"        │                       │
│   │  tools: ["file_write"]                  │                       │
│   │                                         │                       │
│   │  → Spawns isolated sub-agent            │                       │
│   │  → Different model provider             │                       │
│   │  → Limited tool access                  │                       │
│   └─────────────────────────────────────────┘                       │
│                        │                                            │
│                        ▼                                            │
│   ┌─────────────────────────────────────────┐                       │
│   │  use_agent()                            │                       │
│   │  model_provider: "openai"               │                       │
│   │  system_prompt: "You are a math expert" │                       │
│   │  tools: ["calculator"]                  │                       │
│   └─────────────────────────────────────────┘                       │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘

🛠️ use_agent Tool

| Parameter | Type | Description |
|---|---|---|
| `prompt` | string (required) | The task for the sub-agent |
| `system_prompt` | string (required) | Custom personality/instructions |
| `model_provider` | string | `bedrock`, `anthropic`, `openai`, `ollama`, `env`, etc. |
| `model_settings` | object | `{"model_id": "...", "params": {...}}` |
| `tools` | array | Tool names to make available (defaults to the parent's tools) |
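As a sketch, here is a call that exercises every parameter together. The `model_id` and `temperature` values are illustrative only, and `use_agent` is stubbed so the snippet runs standalone:

```python
# Stub standing in for the real use_agent tool, so this sketch runs standalone.
def use_agent(prompt, system_prompt, model_provider=None,
              model_settings=None, tools=None):
    return {"provider": model_provider, "settings": model_settings, "tools": tools}

# All parameters together; model_id and temperature are illustrative values.
result = use_agent(
    prompt="Review this function for readability",
    system_prompt="You are a careful code reviewer.",
    model_provider="anthropic",
    model_settings={"model_id": "claude-3-5-sonnet", "params": {"temperature": 0.2}},
    tools=["file_read"],  # read-only subset of the parent's tools
)
print(result["provider"])  # → anthropic
```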

📝 Examples

Different Model for Creative Task

# Use Anthropic for creative writing
use_agent(
    prompt="Write a haiku about artificial intelligence",
    system_prompt="You are a minimalist poet who writes profound haikus.",
    model_provider="anthropic"
)

Local Model for Privacy

# Use local Ollama for sensitive data
use_agent(
    prompt="Summarize this confidential document",
    system_prompt="You summarize documents concisely.",
    model_provider="ollama",
    model_settings={"model_id": "qwen3:8b"}
)

Limited Tools for Safety

# Sub-agent with only read access
use_agent(
    prompt="Analyze the codebase structure",
    system_prompt="You analyze code without making changes.",
    tools=["file_read", "shell"]  # No file_write, editor
)

Model Comparison

# Compare responses from different models
question = "Explain quantum entanglement simply"

bedrock_answer = use_agent(
    prompt=question,
    system_prompt="You explain complex topics simply.",
    model_provider="bedrock"
)

openai_answer = use_agent(
    prompt=question,
    system_prompt="You explain complex topics simply.",
    model_provider="openai"
)

# Compare the responses...
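The pairwise pattern above generalizes to a loop over any provider list; a minimal sketch, with `use_agent` stubbed so it runs standalone:

```python
# Stub standing in for the real use_agent tool.
def use_agent(prompt, system_prompt, model_provider=None):
    return f"[{model_provider}] {prompt}"

question = "Explain quantum entanglement simply"

# Collect one answer per provider for side-by-side review.
answers = {
    provider: use_agent(
        prompt=question,
        system_prompt="You explain complex topics simply.",
        model_provider=provider,
    )
    for provider in ("bedrock", "openai", "anthropic")
}

for provider, answer in answers.items():
    print(f"--- {provider} ---\n{answer}")
```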

Environment-Based Model

# Use environment variables for model config
import os
os.environ["STRANDS_PROVIDER"] = "litellm"
os.environ["STRANDS_MODEL_ID"] = "openai/gpt-4o"

use_agent(
    prompt="Analyze this data",
    system_prompt="You are a data analyst.",
    model_provider="env"  # Uses STRANDS_* env vars
)

💡 Use Cases

🎨 Specialized Tasks: Use different models for creative vs. analytical tasks.

🔒 Tool Isolation: Limit sub-agent capabilities for safety.

⚖️ Model Comparison: Compare outputs from different providers.

💰 Cost Optimization: Use cheaper models for simple sub-tasks.

🏠 Local Processing: Keep sensitive data on local models.

🔄 Fallback Strategies: Switch providers if the primary fails.
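The fallback idea can be sketched as a small wrapper that walks an ordered provider list. `use_agent` is stubbed here, with one provider deliberately failing, so the snippet runs standalone:

```python
# Stub for the real tool; "openai" fails here to demonstrate the fallback path.
def use_agent(prompt, system_prompt, model_provider=None):
    if model_provider == "openai":
        raise RuntimeError("provider unavailable")
    return f"[{model_provider}] answer"

def ask_with_fallback(prompt, system_prompt, providers):
    """Try each provider in order; return the first successful answer."""
    last_error = None
    for provider in providers:
        try:
            return use_agent(prompt=prompt, system_prompt=system_prompt,
                             model_provider=provider)
        except Exception as exc:
            last_error = exc
    raise RuntimeError(f"all providers failed; last error: {last_error}")

print(ask_with_fallback("Explain DNS", "You explain networking simply.",
                        ["openai", "bedrock", "ollama"]))
# → [bedrock] answer
```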

🤖 Supported Model Providers

| Provider | `model_provider` value |
|---|---|
| Amazon Bedrock | `"bedrock"` |
| Anthropic | `"anthropic"` |
| OpenAI | `"openai"` |
| GitHub Models | `"github"` |
| Google Gemini | `"gemini"` |
| Ollama | `"ollama"` |
| LiteLLM | `"litellm"` |
| LlamaAPI | `"llamaapi"` |
| Environment | `"env"` |
| Parent's model | `None` (default) |

🌐 Distributed Multi-Agent with Zenoh

For truly distributed multi-agent workflows, use Zenoh P2P:

# Terminal 1: DevDuck instance A
zenoh_peer(action="start")

# Terminal 2: DevDuck instance B  
zenoh_peer(action="start")

# Terminal 1: Broadcast to all instances
zenoh_peer(
    action="broadcast",
    message="analyze the codebase and report findings"
)
# Both instances work on the task independently!