# Model Providers
DevDuck supports 14 model providers with smart auto-detection.
## Auto-Detection Priority
DevDuck checks for credentials in this order and uses the first available:
| # | Provider | Detection |
|---|---|---|
| 1 | Amazon Bedrock | AWS_BEARER_TOKEN_BEDROCK or AWS STS credentials |
| 2 | Anthropic | ANTHROPIC_API_KEY |
| 3 | OpenAI | OPENAI_API_KEY |
| 4 | GitHub Models | GITHUB_TOKEN or PAT_TOKEN |
| 5 | Google Gemini | GOOGLE_API_KEY or GEMINI_API_KEY |
| 6 | Cohere | COHERE_API_KEY |
| 7 | Writer | WRITER_API_KEY |
| 8 | Mistral | MISTRAL_API_KEY |
| 9 | LiteLLM | LITELLM_API_KEY |
| 10 | LlamaAPI | LLAMAAPI_API_KEY |
| 11 | SageMaker | SAGEMAKER_ENDPOINT_NAME |
| 12 | LlamaCpp | LLAMACPP_MODEL_PATH |
| 13 | MLX | Apple Silicon + strands_mlx installed |
| 14 | Ollama | Fallback (always available) |
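The priority scan above can be sketched as a first-match walk over credential sources. This is an illustrative sketch only, not DevDuck's actual internals; the function and table names here are assumptions:

```python
import os

# Priority list from the table above (truncated for brevity);
# each entry maps a provider to the env vars that signal credentials.
PRIORITY = [
    ("bedrock",   ["AWS_BEARER_TOKEN_BEDROCK"]),
    ("anthropic", ["ANTHROPIC_API_KEY"]),
    ("openai",    ["OPENAI_API_KEY"]),
    ("github",    ["GITHUB_TOKEN", "PAT_TOKEN"]),
    ("gemini",    ["GOOGLE_API_KEY", "GEMINI_API_KEY"]),
    # ...remaining providers follow the same pattern...
]

def detect_provider(env=os.environ):
    """Return the first provider whose credentials are present."""
    for name, keys in PRIORITY:
        if any(env.get(k) for k in keys):
            return name
    return "ollama"  # fallback: always available
```

Because the first match wins, setting `ANTHROPIC_API_KEY` alongside `OPENAI_API_KEY` selects Anthropic; unset everything and the scan falls through to Ollama.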
## Manual Selection
```bash
# Force a specific provider
export MODEL_PROVIDER=anthropic
devduck

# Use a specific model
export MODEL_PROVIDER=bedrock
export STRANDS_MODEL_ID=us.anthropic.claude-sonnet-4-20250514-v1:0
devduck

# Common generation parameters
export STRANDS_MAX_TOKENS=60000
export STRANDS_TEMPERATURE=1.0
```
## Provider Configuration
### Multi-Model with use_agent
Use different models for different tasks within the same session:
```python
# Use Anthropic for creative writing
use_agent(
    prompt="Write a haiku about artificial intelligence",
    system_prompt="You are a minimalist poet.",
    model_provider="anthropic"
)

# Use local Ollama for sensitive data
use_agent(
    prompt="Summarize this confidential document",
    system_prompt="You summarize documents concisely.",
    model_provider="ollama",
    model_settings={"model_id": "qwen3:8b"}
)

# Defer to the environment's configured provider
use_agent(
    prompt="Analyze this data",
    system_prompt="You are a data analyst.",
    model_provider="env"
)
```
→ See Multi-Agent for more patterns.
## Ollama Smart Defaults
When falling back to Ollama, DevDuck picks a sensible default model for the current platform:
| OS | Default Model |
|---|---|
| macOS | qwen3:1.7b |
| Linux | qwen3:30b |
| Windows | qwen3:8b |
Override the platform default by setting `STRANDS_MODEL_ID`.
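The selection can be pictured as a small lookup with the override taking precedence. A minimal sketch, assuming only the table above; the function name and fallback choice are illustrative, not DevDuck's actual code:

```python
import os
import platform

# Default model per platform, taken from the table above.
OLLAMA_DEFAULTS = {
    "Darwin": "qwen3:1.7b",   # macOS
    "Linux": "qwen3:30b",
    "Windows": "qwen3:8b",
}

def default_ollama_model(system=None, override=None):
    """Pick the Ollama model: STRANDS_MODEL_ID wins over the platform default."""
    override = override or os.environ.get("STRANDS_MODEL_ID")
    if override:
        return override
    return OLLAMA_DEFAULTS.get(system or platform.system(), "qwen3:8b")
```

For example, on macOS with no override this yields `qwen3:1.7b`; exporting `STRANDS_MODEL_ID=llama3:8b` returns that instead.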