The Muxx Gateway needs access to your LLM provider API keys to forward requests. This guide explains how to configure them securely.

## Documentation Index

Fetch the complete documentation index at: https://docs.muxx.dev/llms.txt

Use this file to discover all available pages before exploring further.
## Adding Provider Keys
- Navigate to your project in the Muxx dashboard
- Go to Settings → Provider Keys
- Add your API keys for each provider you want to use
## Supported Providers

### OpenAI

Models supported:
- gpt-4o
- gpt-4o-mini
- gpt-4-turbo
- gpt-4
- gpt-3.5-turbo
- All embedding models
### Anthropic

Models supported:
- claude-3-5-sonnet-20241022
- claude-3-5-haiku-20241022
- claude-3-opus-20240229
- claude-3-sonnet-20240229
- claude-3-haiku-20240307
### Google (Gemini)

Models supported:
- gemini-1.5-pro
- gemini-1.5-flash
- gemini-pro
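For reference, the supported models above can be collected into a lookup table with a helper that checks whether a given model name is available. This is an illustrative sketch, not part of the Muxx API; `SUPPORTED_MODELS` and `is_supported` are hypothetical names.

```python
# Hypothetical registry of the supported models listed above.
# Keys and model names mirror this page; nothing here is Muxx's real code.
SUPPORTED_MODELS = {
    "openai": {"gpt-4o", "gpt-4o-mini", "gpt-4-turbo", "gpt-4", "gpt-3.5-turbo"},
    "anthropic": {
        "claude-3-5-sonnet-20241022",
        "claude-3-5-haiku-20241022",
        "claude-3-opus-20240229",
        "claude-3-sonnet-20240229",
        "claude-3-haiku-20240307",
    },
    "google": {"gemini-1.5-pro", "gemini-1.5-flash", "gemini-pro"},
}

def is_supported(model: str) -> bool:
    """Return True if any provider on this page lists the model."""
    return any(model in models for models in SUPPORTED_MODELS.values())
```

A check like this is useful client-side to fail fast before sending a request for a model the gateway does not yet route.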
## Coming Soon

We’re working on support for:

- Mistral - Mistral Large, Medium, Small
- Groq - Fast inference for open models
- Azure OpenAI - Enterprise OpenAI deployment
- AWS Bedrock - Claude and other models on AWS
## Provider Selection
The gateway automatically routes requests to the correct provider based on the model name:

| Model prefix | Provider |
|---|---|
| gpt-* | OpenAI |
| claude-* | Anthropic |
| gemini-* | Google |
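The routing rule in the table amounts to a prefix match on the model name. A minimal sketch of that logic, assuming nothing about the gateway's internals (`PREFIX_TO_PROVIDER` and `route` are illustrative names, not real Muxx code):

```python
# Hypothetical sketch of the prefix-based routing described in the table above.
PREFIX_TO_PROVIDER = {
    "gpt-": "OpenAI",
    "claude-": "Anthropic",
    "gemini-": "Google",
}

def route(model: str) -> str:
    """Pick a provider by matching the model name's prefix."""
    for prefix, provider in PREFIX_TO_PROVIDER.items():
        if model.startswith(prefix):
            return provider
    raise ValueError(f"No provider configured for model {model!r}")
```

For example, `route("gpt-4o")` resolves to OpenAI, while an unknown prefix raises an error rather than guessing a provider.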