OpenClaw Provider Configuration Guide
OpenClaw supports multiple cloud providers for hosting AI models, storage, and services. Providers are configured in the gateway to route requests to AWS Bedrock, Azure OpenAI, GCP Vertex AI, or local models. This guide covers provider setup, credential management, failover configuration, and cost optimization across multi-cloud deployments.
Why This Is Hard to Do Yourself
These are the common pitfalls that trip people up.
Credential management
Each provider has different auth mechanisms: API keys, service accounts, IAM roles. Keeping credentials secure and rotated is complex.
Cost optimization
Different providers charge different rates. Routing requests to the cheapest available provider saves money but requires configuration.
Multi-cloud failover
Setting up automatic failover when a provider is unavailable requires health checks and retry logic.
Model availability
Not all models are available on every provider: Claude models are offered through AWS Bedrock and GCP Vertex AI, GPT models through Azure OpenAI, and keeping the model-to-provider mapping straight is confusing.
Step-by-Step Guide
Understanding OpenClaw providers
Learn what providers are and how the gateway routes model requests through them.
Configure Anthropic provider
Set up direct Anthropic API access.
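Direct Anthropic access only needs an API key from the Anthropic Console. Below is a minimal configuration sketch; the file layout and key names (`providers`, `apiKey`, `models`) are assumptions here, not the confirmed OpenClaw schema, so check your version's configuration reference.

```json
{
  "providers": {
    "anthropic": {
      "apiKey": "${ANTHROPIC_API_KEY}",
      "models": ["claude-3-5-sonnet-20241022"]
    }
  }
}
```

Prefer referencing the standard `ANTHROPIC_API_KEY` environment variable over hardcoding the key, so the config file can be committed without leaking credentials.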
Configure AWS Bedrock provider
Set up AWS Bedrock for Claude models with IAM auth.
Warning: AWS Bedrock requires requesting access to models. Go to AWS Console → Bedrock → Model access and request the Claude models.
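Bedrock authenticates through IAM rather than API keys, so the config typically names only a region and model IDs while credentials come from the standard AWS chain (environment variables, `~/.aws/credentials`, or an instance role). The key names below are illustrative assumptions; the model ID follows Bedrock's real naming scheme.

```json
{
  "providers": {
    "bedrock": {
      "region": "us-east-1",
      "models": ["anthropic.claude-3-5-sonnet-20240620-v1:0"]
    }
  }
}
```

The IAM principal running the gateway needs `bedrock:InvokeModel` permission on the requested models.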
Configure Azure OpenAI provider
Set up Azure OpenAI for GPT models.
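Azure OpenAI differs from the other providers in that you address a deployment name you created in your Azure resource, not a raw model name, and every request carries an `api-version`. A hedged sketch, again with assumed key names:

```json
{
  "providers": {
    "azure": {
      "endpoint": "https://my-resource.openai.azure.com",
      "apiKey": "${AZURE_OPENAI_API_KEY}",
      "apiVersion": "2024-02-01",
      "deployment": "gpt-4o"
    }
  }
}
```

Replace `my-resource` with your Azure OpenAI resource name and `gpt-4o` with the deployment name you chose in Azure AI Studio.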
Configure GCP Vertex AI provider
Set up GCP Vertex AI for Claude and Gemini models.
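Vertex AI authenticates through Application Default Credentials (a service account key pointed to by `GOOGLE_APPLICATION_CREDENTIALS`, or `gcloud auth application-default login`), so the config names a project and location rather than an API key. Note that Claude on Vertex is only served from specific regions. Key names below are assumptions; the model IDs follow Vertex's real formats.

```json
{
  "providers": {
    "vertex": {
      "project": "my-gcp-project",
      "location": "us-east5",
      "models": ["claude-3-5-sonnet@20240620", "gemini-1.5-pro"]
    }
  }
}
```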
Configure local Ollama provider
Set up local models for privacy and cost savings.
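Ollama serves models from `http://localhost:11434` with no credentials, which makes it the simplest provider to wire up. A sketch with assumed key names:

```json
{
  "providers": {
    "ollama": {
      "baseUrl": "http://localhost:11434",
      "models": ["llama3.1:8b"]
    }
  }
}
```

Pull the model first with `ollama pull llama3.1:8b`; requests to a model that has not been pulled will fail.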
Configure multi-provider failover
Set up automatic failover between providers.
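The failover pattern itself is straightforward: try providers in priority order, treat any error as "provider unavailable", and fall through to the next one. The sketch below shows the logic in Python with stub providers; the provider names and callables are illustrative, not OpenClaw internals.

```python
from typing import Callable

class AllProvidersFailed(Exception):
    """Raised when every provider in the chain errored."""

def call_with_failover(
    prompt: str,
    providers: list[tuple[str, Callable[[str], str]]],
) -> tuple[str, str]:
    """Return (provider_name, response) from the first provider that succeeds."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # a real gateway would match narrower error types
            errors.append(f"{name}: {exc}")
    raise AllProvidersFailed("; ".join(errors))

# Stub providers: the primary times out, the fallback answers.
def primary(prompt: str) -> str:
    raise TimeoutError("bedrock timed out")

def fallback(prompt: str) -> str:
    return f"echo: {prompt}"

name, reply = call_with_failover("hello", [("bedrock", primary), ("anthropic", fallback)])
print(name, reply)  # prints: anthropic echo: hello
```

In production you would add per-provider health checks and backoff so a flapping provider is skipped rather than retried on every request.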
Provider Configuration Getting Complex?
We set up multi-cloud provider routing with failover, cost optimization, and credential management. Get the right provider mix for your workload and budget.