The Muxx Gateway needs access to your LLM provider API keys to forward requests. This guide explains how to configure them securely.
Adding Provider Keys
1. Navigate to your project in the Muxx dashboard
2. Go to Settings → Provider Keys
3. Add your API keys for each provider you want to use
Provider API keys are encrypted at rest and never logged. They are only used to forward requests to the respective providers.
Supported Providers
OpenAI
Models supported:
- gpt-4o
- gpt-4o-mini
- gpt-4-turbo
- gpt-4
- gpt-3.5-turbo
- All embedding models
Get your API key from the OpenAI dashboard.
Anthropic
Models supported:
- claude-3-5-sonnet-20241022
- claude-3-5-haiku-20241022
- claude-3-opus-20240229
- claude-3-sonnet-20240229
- claude-3-haiku-20240307
Get your API key from the Anthropic console.
Google (Gemini)
Models supported:
- gemini-1.5-pro
- gemini-1.5-flash
- gemini-pro
Get your API key from Google AI Studio.
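If you want to check model names on the client before sending a request, the lists above can be collected into a simple lookup. This is an illustrative sketch, not part of any Muxx SDK; the `SUPPORTED_MODELS` table and `is_supported` helper are hypothetical names, and the OpenAI embedding models are omitted for brevity.

```python
# Hypothetical client-side check against the supported-model lists above.
# Not part of the Muxx SDK; all names here are illustrative.
# Note: OpenAI embedding models are supported too but omitted from this set.

SUPPORTED_MODELS = {
    "openai": {
        "gpt-4o", "gpt-4o-mini", "gpt-4-turbo", "gpt-4", "gpt-3.5-turbo",
    },
    "anthropic": {
        "claude-3-5-sonnet-20241022", "claude-3-5-haiku-20241022",
        "claude-3-opus-20240229", "claude-3-sonnet-20240229",
        "claude-3-haiku-20240307",
    },
    "google": {
        "gemini-1.5-pro", "gemini-1.5-flash", "gemini-pro",
    },
}

def is_supported(model: str) -> bool:
    """Return True if the model appears in any provider's supported list."""
    return any(model in models for models in SUPPORTED_MODELS.values())
```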
Coming Soon
We’re working on support for:
- Mistral - Mistral Large, Medium, Small
- Groq - Fast inference for open models
- Azure OpenAI - Enterprise OpenAI deployment
- AWS Bedrock - Claude and other models on AWS
Provider Selection
The gateway automatically routes requests to the correct provider based on the model name:
| Model prefix | Provider |
|---|---|
| gpt-* | OpenAI |
| claude-* | Anthropic |
| gemini-* | Google |
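The prefix rules in the table can be sketched as a small dispatch function. This is a sketch of the routing idea only; the function name and the error behavior are assumptions, not the gateway's actual implementation.

```python
# Illustrative sketch of prefix-based routing, mirroring the table above.
# The function name and error handling are assumptions, not Muxx internals.

def route_provider(model: str) -> str:
    """Map a model name to its provider using the prefix table."""
    prefixes = {
        "gpt-": "OpenAI",
        "claude-": "Anthropic",
        "gemini-": "Google",
    }
    for prefix, provider in prefixes.items():
        if model.startswith(prefix):
            return provider
    raise ValueError(f"No provider configured for model {model!r}")
```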
Multiple Providers
You can configure multiple providers in the same project. The gateway will use the appropriate provider based on the model specified in each request.
```python
# This uses OpenAI
client.chat.completions.create(model="gpt-4o", ...)

# This uses Anthropic
client.messages.create(model="claude-3-5-sonnet-20241022", ...)
```
Both requests go through the same gateway with the same Muxx API key.