Documentation Index
Fetch the complete documentation index at: https://docs.muxx.dev/llms.txt
Use this file to discover all available pages before exploring further.
This guide covers integrating Muxx with Google’s Gemini models.
Quick Start
Gateway
```python
import google.generativeai as genai

# Configure the SDK to route requests through the Muxx gateway
genai.configure(
    api_key="your-google-api-key",
    transport="rest",
    client_options={"api_endpoint": "https://gateway.muxx.dev/v1"},
)

model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content("Hello!")
print(response.text)
```
SDK
```python
from muxx import Muxx
import google.generativeai as genai

muxx = Muxx()
genai.configure(api_key="your-google-api-key")
model = genai.GenerativeModel("gemini-1.5-pro")

@muxx.trace("gemini-call")
def generate(prompt: str) -> str:
    response = model.generate_content(prompt)
    return response.text
```
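The `trace` decorator wraps the function so each call can be recorded as a span. To see the general shape of that pattern, here is an illustrative stand-in (not Muxx's actual implementation) that records the span name and call latency:

```python
import functools
import time

def trace(name: str):
    """Illustrative stand-in for a tracing decorator: records call latency."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            elapsed_ms = (time.perf_counter() - start) * 1000
            # Store the most recent span on the wrapper for inspection
            wrapper.last_span = {"name": name, "elapsed_ms": elapsed_ms}
            return result
        wrapper.last_span = None
        return wrapper
    return decorator

@trace("gemini-call")
def generate(prompt: str) -> str:
    return f"echo: {prompt}"  # stands in for model.generate_content(prompt).text
```

The real decorator additionally ships span data to Muxx; the wrapping and timing structure is the part this sketch demonstrates.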
Gemini Models
| Model | Best For | Cost |
|---|---|---|
| gemini-1.5-pro | Complex tasks, long context | Medium |
| gemini-1.5-flash | Fast, simple tasks | Low |
| gemini-pro | General use | Medium |
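Rather than hard-coding one model name, you can select a model by task profile. A minimal sketch (the mapping below is derived from the table above; it is not a Muxx API):

```python
# Map task profiles to the model names from the table above.
MODEL_BY_PROFILE = {
    "complex": "gemini-1.5-pro",   # complex tasks, long context
    "fast": "gemini-1.5-flash",    # latency-sensitive, simple tasks
    "general": "gemini-pro",       # general use
}

def pick_model(profile: str) -> str:
    """Return a model name for the given task profile, defaulting to general use."""
    return MODEL_BY_PROFILE.get(profile, "gemini-pro")
```

This keeps the cost/latency trade-off in one place, so switching a workload from `gemini-1.5-pro` to `gemini-1.5-flash` is a one-line change.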
Common Patterns
Chat
```python
@muxx.trace("gemini-chat")
def chat(message: str) -> str:
    model = genai.GenerativeModel("gemini-1.5-pro")
    session = model.start_chat()  # avoid shadowing the chat() function
    response = session.send_message(message)
    return response.text
```
With System Instruction
```python
model = genai.GenerativeModel(
    "gemini-1.5-pro",
    system_instruction="You are a helpful coding assistant.",
)
response = model.generate_content("Write a Python function")
```
Multimodal
```python
import PIL.Image

image = PIL.Image.open("image.jpg")
model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content([
    "Describe this image",
    image,
])
print(response.text)
```
Streaming
```python
response = model.generate_content("Write a story", stream=True)
for chunk in response:
    print(chunk.text, end="")
```
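When streaming, you often also want the complete text afterwards, for example to log it or attach it to a trace. One way is to accumulate chunks as you print them. The `Chunk` dataclass below is only a stand-in for the SDK's streamed response objects, which expose a `.text` attribute:

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    """Stand-in for a streamed response chunk exposing a .text attribute."""
    text: str

def collect_stream(chunks) -> str:
    """Print each chunk as it arrives and return the concatenated text."""
    parts = []
    for chunk in chunks:
        print(chunk.text, end="")
        parts.append(chunk.text)
    return "".join(parts)
```

With the real SDK you would pass the streamed `response` directly: `full_text = collect_stream(response)`.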