Plurall AI
Incident · inc_000

Brief inference latency spike (us-east-1)

Operational · started APR 15 · 07:12:00 · resolved APR 15 · 07:16:00
Affected components: models, ingest
  1. Operational · APR 15 · 07:16:00
    Auto-scaler restored capacity. Latency is back to baseline.
  2. Investigating · APR 15 · 07:12:00
    p95 inference latency jumped to 320 ms in us-east-1.
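As a side note on the metric in the timeline above: a p95 figure like 320 ms means 95% of requests completed at or below that latency. A minimal sketch of how such a percentile might be computed from raw request timings (the function name and sample data are illustrative, not from Plurall's monitoring stack):

```python
import math

def p95_ms(samples_ms):
    """Return the 95th-percentile latency using the nearest-rank method."""
    ordered = sorted(samples_ms)
    # Nearest-rank: ceil(0.95 * n) gives a 1-based index into the sorted list.
    rank = math.ceil(0.95 * len(ordered))
    return ordered[rank - 1]

# Hypothetical example: 100 requests, mostly fast, with a slow tail.
samples = [40] * 90 + [320] * 10
print(p95_ms(samples))  # → 320
```

Because p95 tracks the slow tail rather than the average, a handful of overloaded replicas can spike it even while mean latency looks normal, which is why it is a common paging threshold.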