🤖 Model Providers

14 providers with smart auto-selection

🤖 Universal Model Support

DevDuck auto-detects available credentials and picks the best provider, or you can specify exactly which one you want. 14 providers are supported.

📦 Supported Providers

1. Amazon Bedrock
2. Anthropic
3. OpenAI
4. GitHub Models
5. Google Gemini
6. Cohere
7. Writer
8. Mistral
9. LiteLLM
10. LlamaAPI
11. SageMaker
12. LlamaCpp
13. MLX (Apple)
14. Ollama

🔍 Auto-Detection Priority

DevDuck checks for credentials in this order (see the sketch after this list):

  1. Bedrock: AWS_BEARER_TOKEN_BEDROCK or AWS STS credentials
  2. Anthropic: ANTHROPIC_API_KEY
  3. OpenAI: OPENAI_API_KEY
  4. GitHub: GITHUB_TOKEN or PAT_TOKEN
  5. Gemini: GOOGLE_API_KEY or GEMINI_API_KEY
  6. Cohere: COHERE_API_KEY
  7. Writer: WRITER_API_KEY
  8. Mistral: MISTRAL_API_KEY
  9. LiteLLM: LITELLM_API_KEY
  10. LlamaAPI: LLAMAAPI_API_KEY
  11. SageMaker: SAGEMAKER_ENDPOINT_NAME
  12. LlamaCpp: LLAMACPP_MODEL_PATH
  13. MLX: Apple Silicon detected + strands_mlx installed
  14. Ollama: fallback (always available)
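
A minimal sketch of how that priority chain can be expressed (an illustration, not DevDuck's actual implementation; the environment variable names are the documented ones, everything else is assumed):

import importlib.util
import os
import platform

# (provider, availability check) pairs in documented priority order.
# Simplified: the real Bedrock check also accepts standard AWS STS credentials.
CHECKS = [
    ("bedrock",   lambda: "AWS_BEARER_TOKEN_BEDROCK" in os.environ),
    ("anthropic", lambda: "ANTHROPIC_API_KEY" in os.environ),
    ("openai",    lambda: "OPENAI_API_KEY" in os.environ),
    ("github",    lambda: "GITHUB_TOKEN" in os.environ or "PAT_TOKEN" in os.environ),
    ("gemini",    lambda: "GOOGLE_API_KEY" in os.environ or "GEMINI_API_KEY" in os.environ),
    ("cohere",    lambda: "COHERE_API_KEY" in os.environ),
    ("writer",    lambda: "WRITER_API_KEY" in os.environ),
    ("mistral",   lambda: "MISTRAL_API_KEY" in os.environ),
    ("litellm",   lambda: "LITELLM_API_KEY" in os.environ),
    ("llamaapi",  lambda: "LLAMAAPI_API_KEY" in os.environ),
    ("sagemaker", lambda: "SAGEMAKER_ENDPOINT_NAME" in os.environ),
    ("llamacpp",  lambda: "LLAMACPP_MODEL_PATH" in os.environ),
    ("mlx",       lambda: platform.system() == "Darwin"
                          and platform.machine() == "arm64"
                          and importlib.util.find_spec("strands_mlx") is not None),
]

def detect_provider() -> str:
    # An explicit MODEL_PROVIDER overrides auto-detection (see Manual Selection).
    if "MODEL_PROVIDER" in os.environ:
        return os.environ["MODEL_PROVIDER"]
    for name, available in CHECKS:
        if available():
            return name
    return "ollama"  # documented fallback, always available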

⚙️ Manual Selection

# Force specific provider
export MODEL_PROVIDER=anthropic
devduck

# With specific model
export MODEL_PROVIDER=bedrock
export STRANDS_MODEL_ID=us.anthropic.claude-sonnet-4-20250514-v1:0
devduck

# Common parameters
export STRANDS_MAX_TOKENS=60000
export STRANDS_TEMPERATURE=1.0
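
The same settings can also be driven from a wrapper script. A minimal sketch using Python's standard library (the variable names and values come from the examples above; the wrapper itself is just an illustration):

import os
import subprocess

# Inherit the current environment and override the model settings.
env = os.environ.copy()
env["MODEL_PROVIDER"] = "bedrock"
env["STRANDS_MODEL_ID"] = "us.anthropic.claude-sonnet-4-20250514-v1:0"
env["STRANDS_MAX_TOKENS"] = "60000"

subprocess.run(["devduck"], env=env)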

📋 Provider Configuration

Provider    Environment Variables
Bedrock     AWS_BEARER_TOKEN_BEDROCK or AWS credentials
Anthropic   ANTHROPIC_API_KEY
OpenAI      OPENAI_API_KEY
Gemini      GOOGLE_API_KEY or GEMINI_API_KEY
Ollama      OLLAMA_HOST (default: http://localhost:11434)
MLX         STRANDS_MODEL_ID (default: mlx-community/Qwen3-1.7B-4bit)
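
For the two entries with documented defaults, a fallback lookup is all that's needed; a minimal sketch (the lookup pattern is illustrative, the default values are the documented ones):

import os

ollama_host = os.environ.get("OLLAMA_HOST", "http://localhost:11434")
mlx_model = os.environ.get("STRANDS_MODEL_ID", "mlx-community/Qwen3-1.7B-4bit")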

🔀 Multi-Model with use_agent

Use different models for different tasks within the same session:

# Use Bedrock for main agent, but OpenAI for a specific task
use_agent(
    prompt="Write a haiku about coding",
    system_prompt="You are a poet",
    model_provider="openai",
    model_settings={"model_id": "gpt-4o"}
)

# Use Ollama for local processing
use_agent(
    prompt="Summarize this text",
    system_prompt="You summarize text concisely",
    model_provider="ollama",
    model_settings={"model_id": "qwen3:8b"}
)

# Use environment config
use_agent(
    prompt="Analyze data",
    system_prompt="You are a data analyst",
    model_provider="env"  # Uses STRANDS_* env vars
)

🦙 Ollama Smart Defaults

The Ollama model is auto-selected based on the operating system (see the sketch after the table):

Platform   Default Model   Reason
macOS      qwen3:1.7b      Optimized for Apple Silicon
Linux      qwen3:30b       Larger models for servers
Other      qwen3:8b        Balanced default
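
A minimal sketch of that platform check (an assumed implementation; the per-platform model names are the documented ones):

import platform

def default_ollama_model() -> str:
    system = platform.system()
    if system == "Darwin":  # macOS, including Apple Silicon
        return "qwen3:1.7b"
    if system == "Linux":   # servers typically have headroom for larger models
        return "qwen3:30b"
    return "qwen3:8b"       # balanced default elsewhere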