BUZZ Dedicated LLM Endpoint Inference Dashboard

Monitor real-time throughput, latency, KV cache, and request metrics for your dedicated LLM inference endpoints.

To get started, find your metrics URL in the BUZZ LLM Inference Console under your dedicated endpoint's details, then add it below.
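
Once you have the metrics URL, you can also consume it outside the dashboard. The sketch below is a minimal, hypothetical example that assumes the endpoint exposes Prometheus-style text metrics; the metric names (`llm_requests_total`, `llm_ttft_seconds`, `kv_cache_usage_ratio`) and payload format are illustrative assumptions, not the documented BUZZ schema.

```python
def parse_metrics(text: str) -> dict[str, float]:
    """Parse a Prometheus-style text exposition into a {name: value} dict.

    Assumes simple "name value" lines; comment lines (#) are skipped.
    """
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Split on the last space: everything before it is the metric name
        # (including any labels), everything after is the sample value.
        name, _, value = line.rpartition(" ")
        try:
            metrics[name] = float(value)
        except ValueError:
            continue  # skip malformed samples
    return metrics


# Hypothetical payload, mirroring the metrics the dashboard surfaces
# (requests, latency, KV cache); real metric names may differ.
sample = """\
# HELP llm_requests_total Total requests served
llm_requests_total 1024
llm_ttft_seconds 0.182
kv_cache_usage_ratio 0.47
"""

parsed = parse_metrics(sample)
```

In practice you would fetch the payload from your endpoint's metrics URL (for example with `requests.get(metrics_url).text`) instead of using an inline sample.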