## 14 providers with smart auto-selection
DevDuck auto-detects available credentials and picks the best provider, or you can specify exactly the one you want. 14 providers are supported.

When no provider is forced, DevDuck checks each provider's credentials in priority order. To override the selection:
```bash
# Force a specific provider
export MODEL_PROVIDER=anthropic
devduck

# With a specific model
export MODEL_PROVIDER=bedrock
export STRANDS_MODEL_ID=us.anthropic.claude-sonnet-4-20250514-v1:0
devduck

# Common parameters
export STRANDS_MAX_TOKENS=60000
export STRANDS_TEMPERATURE=1.0
```
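Collecting these overrides in code might look like the sketch below. This is illustrative only; which `STRANDS_*` keys DevDuck actually reads beyond those shown above is an assumption.

```python
import os

def strands_settings(env=os.environ):
    """Collect optional STRANDS_* overrides into a settings dict (sketch)."""
    settings = {}
    if "STRANDS_MODEL_ID" in env:
        settings["model_id"] = env["STRANDS_MODEL_ID"]
    if "STRANDS_MAX_TOKENS" in env:
        settings["max_tokens"] = int(env["STRANDS_MAX_TOKENS"])
    if "STRANDS_TEMPERATURE" in env:
        settings["temperature"] = float(env["STRANDS_TEMPERATURE"])
    return settings
```

Unset variables are simply omitted, so provider defaults apply when nothing is exported.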
| Provider | Environment Variables |
|---|---|
| Bedrock | `AWS_BEARER_TOKEN_BEDROCK` or AWS credentials |
| Anthropic | `ANTHROPIC_API_KEY` |
| OpenAI | `OPENAI_API_KEY` |
| Gemini | `GOOGLE_API_KEY` or `GEMINI_API_KEY` |
| Ollama | `OLLAMA_HOST` (default: `http://localhost:11434`) |
| MLX | `STRANDS_MODEL_ID` (default: `mlx-community/Qwen3-1.7B-4bit`) |
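The credential detection amounts to scanning the environment for the variables in the table above. A minimal sketch; the priority order and fallback here are assumptions, not DevDuck's documented behavior:

```python
import os

# Each provider paired with env vars that indicate it is configured.
# The order below is illustrative; DevDuck's actual priority may differ.
PROVIDER_CREDENTIALS = [
    ("bedrock", ["AWS_BEARER_TOKEN_BEDROCK", "AWS_ACCESS_KEY_ID"]),
    ("anthropic", ["ANTHROPIC_API_KEY"]),
    ("openai", ["OPENAI_API_KEY"]),
    ("gemini", ["GOOGLE_API_KEY", "GEMINI_API_KEY"]),
    ("ollama", ["OLLAMA_HOST"]),
]

def detect_provider(env=os.environ):
    """Return MODEL_PROVIDER if forced, else the first provider with credentials."""
    forced = env.get("MODEL_PROVIDER")
    if forced:
        return forced
    for provider, keys in PROVIDER_CREDENTIALS:
        if any(key in env for key in keys):
            return provider
    return "ollama"  # assumed local fallback (works with its default host)
```

Setting `MODEL_PROVIDER` short-circuits detection entirely, matching the override shown earlier.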
Use different models for different tasks within the same session:
```python
# Use Bedrock for the main agent, but OpenAI for a specific task
use_agent(
    prompt="Write a haiku about coding",
    system_prompt="You are a poet",
    model_provider="openai",
    model_settings={"model_id": "gpt-4o"},
)

# Use Ollama for local processing
use_agent(
    prompt="Summarize this text",
    system_prompt="You summarize text concisely",
    model_provider="ollama",
    model_settings={"model_id": "qwen3:8b"},
)

# Use environment config
use_agent(
    prompt="Analyze data",
    system_prompt="You are a data analyst",
    model_provider="env",  # uses STRANDS_* env vars
)
```
The default Ollama model is auto-selected based on the operating system:
| Platform | Default Model | Reason |
|---|---|---|
| macOS | qwen3:1.7b | Optimized for Apple Silicon |
| Linux | qwen3:30b | Larger models for servers |
| Other | qwen3:8b | Balanced default |
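The platform defaults in the table above reduce to a small lookup. A sketch, assuming the usual `platform.system()` names (`"Darwin"` for macOS, `"Linux"` for Linux):

```python
import platform

# Default Ollama model per platform, from the table above.
DEFAULT_MODELS = {
    "Darwin": "qwen3:1.7b",  # macOS: optimized for Apple Silicon
    "Linux": "qwen3:30b",    # servers can host larger models
}

def default_ollama_model(system=None):
    """Pick the platform-appropriate default, falling back to qwen3:8b."""
    system = system or platform.system()
    return DEFAULT_MODELS.get(system, "qwen3:8b")  # balanced default
```

Any other platform (e.g. Windows) falls through to the balanced `qwen3:8b` default.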