Nested agents, multi-model workflows
Spawn isolated sub-agents with different models, system prompts, and tool sets. Perfect for specialized tasks, model comparison, and complex workflows.
```
┌─────────────────────────────────────────────────────────────────────┐
│ Parent Agent (DevDuck)                                              │
│ Model: Bedrock Claude                                               │
├─────────────────────────────────────────────────────────────────────┤
│                                                                     │
│ User: "Write a poem, then analyze it mathematically"                │
│                                                                     │
│   ┌───────────────────────────────────────┐                         │
│   │ use_agent()                           │                         │
│   │ model_provider: "anthropic"           │                         │
│   │ system_prompt: "You are a poet"       │                         │
│   │ tools: ["file_write"]                 │                         │
│   │                                       │                         │
│   │ → Spawns isolated sub-agent           │                         │
│   │ → Different model provider            │                         │
│   │ → Limited tool access                 │                         │
│   └───────────────────────────────────────┘                         │
│                     │                                               │
│                     ▼                                               │
│   ┌───────────────────────────────────────┐                         │
│   │ use_agent()                           │                         │
│   │ model_provider: "openai"              │                         │
│   │ system_prompt: "You are a math expert"│                         │
│   │ tools: ["calculator"]                 │                         │
│   └───────────────────────────────────────┘                         │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘
```
| Parameter | Type | Description |
|---|---|---|
| `prompt` | string (required) | The task for the sub-agent |
| `system_prompt` | string (required) | Custom personality/instructions |
| `model_provider` | string | `bedrock`, `anthropic`, `openai`, `ollama`, `env`, etc. |
| `model_settings` | object | `{"model_id": "...", "params": {...}}` |
| `tools` | array | Tool names to make available (defaults to the parent's tools) |
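To make the defaulting behavior in the table concrete, here is a minimal sketch of how a sub-agent's configuration could resolve against its parent's. The function name and dict shape are illustrative assumptions, not the tool's real internals; the key point is that omitting `tools` inherits the parent's tool set and omitting `model_provider` inherits the parent's model.

```python
# Illustrative sketch only: resolve_subagent_config is a hypothetical helper,
# not part of the use_agent tool's actual implementation.
def resolve_subagent_config(parent_tools, prompt, system_prompt,
                            model_provider=None, model_settings=None, tools=None):
    return {
        "prompt": prompt,
        "system_prompt": system_prompt,
        # None means "inherit the parent's model"
        "model_provider": model_provider,
        "model_settings": model_settings or {},
        # tools omitted -> sub-agent inherits the parent's tool set
        "tools": list(tools) if tools is not None else list(parent_tools),
    }
```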
```python
# Use Anthropic for creative writing
use_agent(
    prompt="Write a haiku about artificial intelligence",
    system_prompt="You are a minimalist poet who writes profound haikus.",
    model_provider="anthropic"
)
```
```python
# Use local Ollama for sensitive data
use_agent(
    prompt="Summarize this confidential document",
    system_prompt="You summarize documents concisely.",
    model_provider="ollama",
    model_settings={"model_id": "qwen3:8b"}
)
```
```python
# Sub-agent with only read access
use_agent(
    prompt="Analyze the codebase structure",
    system_prompt="You analyze code without making changes.",
    tools=["file_read", "shell"]  # No file_write or editor
)
```
```python
# Compare responses from different models
question = "Explain quantum entanglement simply"

bedrock_answer = use_agent(
    prompt=question,
    system_prompt="You explain complex topics simply.",
    model_provider="bedrock"
)

openai_answer = use_agent(
    prompt=question,
    system_prompt="You explain complex topics simply.",
    model_provider="openai"
)

# Compare the responses...
```
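One simple way to make the comparison concrete is a similarity score over the two answers. The sketch below is plain Python, independent of `use_agent`; the helper name is illustrative, and it uses Jaccard similarity over word sets (1.0 means identical vocabulary).

```python
# Illustrative helper for comparing two model answers; not part of the tool's API.
def answer_overlap(a: str, b: str) -> float:
    """Jaccard similarity over lowercased word sets: 1.0 = same vocabulary."""
    words_a = set(a.lower().split())
    words_b = set(b.lower().split())
    if not words_a and not words_b:
        return 1.0
    return len(words_a & words_b) / len(words_a | words_b)

# Stand-in answers for illustration:
score = answer_overlap(
    "Entangled particles share one quantum state.",
    "Entangled particles share a single quantum state.",
)
```

A low score flags answers worth reading side by side; a high score suggests the providers broadly agree.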
```python
# Use environment variables for model config
import os
os.environ["STRANDS_PROVIDER"] = "litellm"
os.environ["STRANDS_MODEL_ID"] = "openai/gpt-4o"

use_agent(
    prompt="Analyze this data",
    system_prompt="You are a data analyst.",
    model_provider="env"  # Uses STRANDS_* env vars
)
```
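For debugging, it can help to read the environment-based configuration back before spawning the sub-agent. The `STRANDS_*` variable names come from the example above; the helper itself is a hypothetical sketch, not part of the tool's API.

```python
import os

# Hypothetical helper: reads back the env-based model config for inspection.
def read_env_model_config():
    return {
        "provider": os.environ.get("STRANDS_PROVIDER"),
        "model_id": os.environ.get("STRANDS_MODEL_ID"),
    }
```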
- Use different models for creative vs. analytical tasks.
- Limit sub-agent capabilities for safety.
- Compare outputs from different providers.
- Use cheaper models for simple sub-tasks.
- Keep sensitive data on local models.
- Switch providers if the primary fails.
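The last point, falling back when the primary provider fails, can be wrapped in a small retry loop. This is a hedged sketch: in a real session you would invoke the `use_agent` tool directly, so it is stubbed here as `fake_use_agent`, and the assumption is that a failed call raises an exception.

```python
# Illustrative fallback wrapper; call_with_fallback is not part of the tool's API.
def call_with_fallback(call, prompt, system_prompt, providers):
    """Try each provider in order; return the first successful answer."""
    last_error = None
    for provider in providers:
        try:
            return call(prompt=prompt, system_prompt=system_prompt,
                        model_provider=provider)
        except Exception as exc:  # provider unreachable, auth error, etc.
            last_error = exc
    raise RuntimeError(f"All providers failed: {providers}") from last_error

# Stub standing in for use_agent, for illustration only:
def fake_use_agent(prompt, system_prompt, model_provider):
    if model_provider == "bedrock":
        raise ConnectionError("bedrock unreachable")
    return f"[{model_provider}] answer"

answer = call_with_fallback(fake_use_agent, "Explain DNS",
                            "You teach networking.", ["bedrock", "openai"])
```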
| Provider | `model_provider` value |
|---|---|
| Amazon Bedrock | "bedrock" |
| Anthropic | "anthropic" |
| OpenAI | "openai" |
| GitHub Models | "github" |
| Google Gemini | "gemini" |
| Ollama | "ollama" |
| LiteLLM | "litellm" |
| LlamaAPI | "llamaapi" |
| Environment | "env" |
| Parent's Model | None (default) |
For truly distributed multi-agent workflows across separate DevDuck instances, use Zenoh P2P:
```python
# Terminal 1: DevDuck instance A
zenoh_peer(action="start")

# Terminal 2: DevDuck instance B
zenoh_peer(action="start")

# Terminal 1: Broadcast to all instances
zenoh_peer(
    action="broadcast",
    message="analyze the codebase and report findings"
)
# Both instances work on the task independently!
```