Viewing Logs
Navigate to your project and click Logs in the sidebar. Each log entry shows:
- Timestamp - When the request was made
- Model - Which LLM model was used
- Status - Success or error
- Latency - Response time in milliseconds
- Tokens - Input and output token counts
- Cost - Calculated cost
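The fields above can be pictured as one record per request. A minimal sketch in Python, where the field names are illustrative assumptions rather than the exact log schema:

```python
# Illustrative shape of a single log entry. Field names are
# assumptions for this sketch, not the exact export schema.
log_entry = {
    "timestamp": "2024-05-01T12:34:56Z",  # when the request was made
    "model": "gpt-4o",                    # which LLM model was used
    "status": "success",                  # success or error
    "latency_ms": 1240,                   # response time in milliseconds
    "input_tokens": 512,                  # input token count
    "output_tokens": 128,                 # output token count
    "cost_usd": 0.0041,                   # calculated cost
}
```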
Log Details
Click any log entry to see full details:
Request
- Full message history
- System prompt (if any)
- Model parameters (temperature, max_tokens, etc.)
- Custom metadata
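As a sketch, the request side of a log entry corresponds to a payload like the one below. The payload shape and the `metadata` field name are assumptions for illustration, not the exact SDK API:

```python
# Hypothetical request payload: message history, system prompt,
# model parameters, and custom metadata all appear in the log's
# Request section.
request = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this document."},
    ],
    "temperature": 0.2,       # model parameters
    "max_tokens": 500,
    "metadata": {             # custom metadata (assumed field name)
        "user_id": "u_123",
        "feature": "summarize",
    },
}
```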
Response
- Complete model response
- Tool/function calls (if any)
- Finish reason
Metrics
- Input tokens
- Output tokens
- Total tokens
- Cost breakdown
- Latency
- Time to first token (for streaming)
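To make the cost breakdown concrete, here is a sketch of how a per-request cost can be derived from the token counts. The per-million-token prices are made-up example values, not real rates:

```python
# Example prices per 1M tokens -- illustrative values only.
PRICE_PER_M_INPUT = 2.50
PRICE_PER_M_OUTPUT = 10.00

def cost_breakdown(input_tokens, output_tokens):
    """Derive input/output/total cost from token counts."""
    input_cost = input_tokens / 1_000_000 * PRICE_PER_M_INPUT
    output_cost = output_tokens / 1_000_000 * PRICE_PER_M_OUTPUT
    return {
        "input_cost": round(input_cost, 6),
        "output_cost": round(output_cost, 6),
        "total_cost": round(input_cost + output_cost, 6),
    }
```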
Trace Context
If you are using the SDK with tracing, each log entry also includes:
- Trace ID
- Span hierarchy
- Parent/child relationships
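The trace context above works by tagging related log entries: spans in one operation share a trace ID, and each child span records its parent. A minimal sketch (the span structure is an assumption, not the SDK's internal format):

```python
import uuid

def new_span(trace_id, parent_id=None):
    """Create a span record tied to a trace, optionally nested under a parent."""
    return {
        "trace_id": trace_id,        # shared by every span in the trace
        "span_id": uuid.uuid4().hex, # unique to this span
        "parent_id": parent_id,      # None for the root span
    }

trace_id = uuid.uuid4().hex
root = new_span(trace_id)                    # top-level operation
child = new_span(trace_id, root["span_id"])  # nested LLM call
```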
Filtering
Use filters to find specific logs:
By Status
- All - Show everything
- Success - Only successful requests
- Error - Only failed requests
By Provider
- OpenAI
- Anthropic
By Model
Filter by specific models like gpt-4o, claude-3-5-sonnet, etc.
By Time
- Last hour
- Last 24 hours
- Last 7 days
- Custom range
By Metadata
If you include custom metadata with your requests, you can filter by those metadata values.
Search
Press Cmd/Ctrl + K or use the search box to search across:
- Request content
- Response content
- Metadata values
- Error messages
Exporting Logs
Export logs for external analysis:
1. Apply desired filters
2. Click Export
3. Choose format (CSV or JSON)
4. Download the file
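Once downloaded, an export can be analyzed with standard tooling. A sketch, assuming a JSON export whose records carry a `status` field like the log entries above (load the file with `json.load`, then compute over the list):

```python
def error_rate(logs):
    """Fraction of exported log entries whose status is 'error'."""
    if not logs:
        return 0.0
    errors = sum(1 for entry in logs if entry.get("status") == "error")
    return errors / len(logs)

# Example: entries as they might appear in a JSON export.
sample = [
    {"status": "success"},
    {"status": "error"},
    {"status": "success"},
    {"status": "success"},
]
```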
Log Retention
| Plan | Retention |
|---|---|
| Free | 7 days |
| Pro | 90 days |
| Enterprise | 1 year |