Complete guide to using Muxx with Google Gemini.

## Quick Start

### Gateway

Route requests through the Muxx gateway by pointing the Gemini client at the gateway endpoint:
```python
import google.generativeai as genai

# Configure to use Muxx gateway
genai.configure(
    api_key="your-google-api-key",
    transport="rest",
    client_options={"api_endpoint": "https://gateway.muxx.dev/v1"},
)

model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content("Hello!")
print(response.text)
```
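If you prefer not to hard-code credentials and endpoints, the same configuration can be read from the environment. A minimal sketch; the variable names `GOOGLE_API_KEY` and `MUXX_GATEWAY_URL` are illustrative, not names required by Muxx or the Gemini SDK:

```python
import os

import google.generativeai as genai

# Hypothetical environment variable names; adjust to your own setup.
genai.configure(
    api_key=os.environ["GOOGLE_API_KEY"],
    transport="rest",
    client_options={
        "api_endpoint": os.environ.get("MUXX_GATEWAY_URL", "https://gateway.muxx.dev/v1")
    },
)
```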
### SDK

Use the Muxx SDK to trace Gemini calls made with the official client:

```python
from muxx import Muxx
import google.generativeai as genai

muxx = Muxx()
genai.configure(api_key="your-google-api-key")

model = genai.GenerativeModel("gemini-1.5-pro")

@muxx.trace("gemini-call")
def generate(prompt: str) -> str:
    response = model.generate_content(prompt)
    return response.text
```
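The two approaches compose: route traffic through the gateway and add SDK traces on top. A sketch built only from the two snippets above; no additional Muxx API is assumed:

```python
from muxx import Muxx
import google.generativeai as genai

muxx = Muxx()

# Same gateway configuration as in the Gateway quick start.
genai.configure(
    api_key="your-google-api-key",
    transport="rest",
    client_options={"api_endpoint": "https://gateway.muxx.dev/v1"},
)

model = genai.GenerativeModel("gemini-1.5-pro")

@muxx.trace("gemini-call")
def generate(prompt: str) -> str:
    response = model.generate_content(prompt)
    return response.text

print(generate("Hello!"))
```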
## Gemini Models

| Model | Best For | Cost |
|---|---|---|
| gemini-1.5-pro | Complex tasks, long context | Medium |
| gemini-1.5-flash | Fast, simple tasks | Low |
| gemini-pro | General use | Medium |
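If you switch models per task, a small helper keeps the choice in one place. This is only an illustration of the table above; the task labels and the helper itself are hypothetical, not part of Muxx or the Gemini SDK:

```python
import google.generativeai as genai

# Illustrative mapping from task type to model, following the table above.
MODEL_BY_TASK = {
    "complex": "gemini-1.5-pro",    # complex tasks, long context
    "simple": "gemini-1.5-flash",   # fast, simple tasks
    "general": "gemini-pro",        # general use
}

def model_for(task: str) -> genai.GenerativeModel:
    return genai.GenerativeModel(MODEL_BY_TASK.get(task, "gemini-pro"))

response = model_for("simple").generate_content("Summarize this sentence.")
print(response.text)
```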
## Chat

Multi-turn chat calls can be traced the same way; the decorator comes from the Muxx client created in the SDK quick start:

```python
@muxx.trace("gemini-chat")
def chat(message: str) -> str:
    model = genai.GenerativeModel("gemini-1.5-pro")
    session = model.start_chat()
    response = session.send_message(message)
    return response.text
```
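The example above starts a fresh session on every call, so no history is carried between messages. To hold a multi-turn conversation, reuse one session. A sketch using the Gemini client's `start_chat` history (assumes `genai` is already configured as in the quick start):

```python
import google.generativeai as genai

model = genai.GenerativeModel("gemini-1.5-pro")
session = model.start_chat(history=[])

# Each send_message call appends the turn to session.history.
print(session.send_message("My name is Ada.").text)
print(session.send_message("What is my name?").text)

# Inspect the accumulated conversation.
for turn in session.history:
    print(turn.role, ":", turn.parts[0].text)
```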
## System Instructions

Pass a system instruction when constructing the model:

```python
model = genai.GenerativeModel(
    "gemini-1.5-pro",
    system_instruction="You are a helpful coding assistant.",
)

response = model.generate_content("Write a Python function")
```
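Generation parameters can be set alongside the system instruction. A sketch using the Gemini client's `GenerationConfig`; the values are illustrative:

```python
import google.generativeai as genai

model = genai.GenerativeModel(
    "gemini-1.5-pro",
    system_instruction="You are a helpful coding assistant.",
)

# Illustrative values; tune for your use case.
config = genai.GenerationConfig(
    temperature=0.2,
    max_output_tokens=512,
)

response = model.generate_content("Write a Python function", generation_config=config)
print(response.text)
```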
## Multimodal

Gemini models accept images alongside text:

```python
import PIL.Image

image = PIL.Image.open("image.jpg")

model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content([
    "Describe this image",
    image,
])
print(response.text)
```
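Images do not have to go through PIL; the Gemini client also accepts raw bytes with a MIME type. A sketch passing an inline blob (the file name is illustrative):

```python
import pathlib

import google.generativeai as genai

# Inline image data with an explicit MIME type.
image_part = {
    "mime_type": "image/jpeg",
    "data": pathlib.Path("image.jpg").read_bytes(),
}

model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content(["Describe this image", image_part])
print(response.text)
```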
## Streaming

Stream partial results by passing `stream=True`:

```python
response = model.generate_content("Write a story", stream=True)

for chunk in response:
    print(chunk.text, end="")
```
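Streaming also works inside a traced function: collect the chunks and return the full text so the trace records the complete response. A sketch combining the streaming call with the `@muxx.trace` decorator from the SDK quick start (assumes `genai` is already configured):

```python
from muxx import Muxx
import google.generativeai as genai

muxx = Muxx()
model = genai.GenerativeModel("gemini-1.5-pro")

@muxx.trace("gemini-stream")
def stream_story(prompt: str) -> str:
    chunks = []
    for chunk in model.generate_content(prompt, stream=True):
        print(chunk.text, end="", flush=True)  # show output as it arrives
        chunks.append(chunk.text)
    return "".join(chunks)

stream_story("Write a story")
```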