This guide covers integrating Muxx with Google’s Gemini models.

Quick Start

Gateway

import google.generativeai as genai

# Route requests through the Muxx gateway by overriding the API endpoint
genai.configure(
    api_key="your-google-api-key",
    transport="rest",  # endpoint override requires the REST transport
    client_options={"api_endpoint": "https://gateway.muxx.dev/v1"}
)
)

model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content("Hello!")
print(response.text)

SDK

from muxx import Muxx
import google.generativeai as genai

muxx = Muxx()
genai.configure(api_key="your-google-api-key")

model = genai.GenerativeModel("gemini-1.5-pro")

@muxx.trace("gemini-call")
def generate(prompt: str) -> str:
    response = model.generate_content(prompt)
    return response.text
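The `@muxx.trace` decorator wraps the function but leaves its behavior unchanged. To make that concrete without a live API key, here is a hypothetical stand-in for `muxx.trace` (the real SDK likely records more, and reports to the Muxx backend); the Gemini call is replaced by a local stub:

```python
import functools
import time

def trace(name):
    """Minimal pass-through tracing decorator: records span name and latency."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            wrapper.last_span = {"name": name, "seconds": time.perf_counter() - start}
            return result
        wrapper.last_span = None
        return wrapper
    return decorator

@trace("gemini-call")
def generate(prompt: str) -> str:
    # Stand-in for: model.generate_content(prompt).text
    return f"echo: {prompt}"

print(generate("Hello!"))           # traced function behaves normally
print(generate.last_span["name"])   # span metadata is captured on the side
```

The point is that tracing is additive: callers of `generate` see no difference, while the span data is collected alongside.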

Gemini Models

Model              Best For                      Cost
gemini-1.5-pro     Complex tasks, long context   Medium
gemini-1.5-flash   Fast, simple tasks            Low
gemini-pro         General use                   Medium

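In code, the table above can be applied with a small selection helper. This is a hypothetical convenience function (not part of the Muxx or Gemini SDKs), sketched under the assumption that long context trumps latency when both are requested:

```python
def pick_model(needs_long_context: bool = False, latency_sensitive: bool = False) -> str:
    """Pick a Gemini model ID per the table: pro for long context, flash for speed."""
    if needs_long_context:
        return "gemini-1.5-pro"
    if latency_sensitive:
        return "gemini-1.5-flash"
    return "gemini-pro"

model_id = pick_model(latency_sensitive=True)  # "gemini-1.5-flash"
```

Centralizing the choice in one helper makes it easy to adjust model routing later without touching call sites.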
Common Patterns

Chat

@muxx.trace("gemini-chat")
def chat(message: str) -> str:
    model = genai.GenerativeModel("gemini-1.5-pro")
    session = model.start_chat()
    response = session.send_message(message)
    return response.text
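Each call above starts a fresh chat, so no context carries over between messages. To resume a conversation, prior turns can be passed to `start_chat(history=...)`; a sketch of building that history structure locally (roles are `"user"` and `"model"`):

```python
def append_turn(history: list, role: str, text: str) -> list:
    """Append one turn in the dict format start_chat(history=...) accepts."""
    if role not in ("user", "model"):
        raise ValueError("role must be 'user' or 'model'")
    history.append({"role": role, "parts": [text]})
    return history

history = []
append_turn(history, "user", "What is the capital of France?")
append_turn(history, "model", "Paris.")
# session = model.start_chat(history=history)  # resume with prior context
```

Persisting this list between requests is what turns the stateless `chat` function above into a stateful conversation.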

With System Instruction

model = genai.GenerativeModel(
    "gemini-1.5-pro",
    system_instruction="You are a helpful coding assistant."
)

response = model.generate_content("Write a Python function")

Multimodal

import PIL.Image  # requires the Pillow package

image = PIL.Image.open("image.jpg")

model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content([
    "Describe this image",
    image
])

print(response.text)

Streaming

response = model.generate_content("Write a story", stream=True)

for chunk in response:
    print(chunk.text, end="")
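When streaming, you often want both incremental output and the final full text. A small sketch that accumulates chunks while printing them, shown here with stub strings in place of a live response:

```python
def collect_stream(chunks) -> str:
    """Print chunks as they arrive and return the concatenated full text."""
    parts = []
    for chunk in chunks:
        print(chunk, end="", flush=True)
        parts.append(chunk)
    return "".join(parts)

# With a live call you would pass the chunk texts:
# full = collect_stream(c.text for c in model.generate_content("Write a story", stream=True))
full = collect_stream(["Once ", "upon ", "a time."])  # stub chunks
```

This keeps the streaming display and the final transcript (e.g. for logging or tracing) in one pass over the response.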