
Degraded performance

Some regions or services are experiencing elevated latency or reduced capacity.

View incidents
Updated just now (mock) · Percentiles: p95 latency

Core services

ArkNet Mainnet (Consensus)
Finality + validator health
Registry API
Provider discovery + filters
Compute Grid (GPU Nodes)
Scheduling + dispatch
Developer Console
UI + token controls
Auth & IAM
Sessions + scopes

Regional status

US East (N. Virginia)
p95: 14ms · Healthy · Excellent
US West (Oregon)
p95: 42ms · Healthy · Good
EU Central (Frankfurt)
p95: 88ms · Healthy · Fair
AP Northeast (Tokyo)
p95: 140ms · Constrained · Fair

Live throughput

1,284
Active GPUs
online + schedulable
48.2
Jobs / sec
dispatch rate
1.01s
Avg finality
consensus p50
84,200
Total TFLOPS
estimated peak
Metrics are mocked for SSR. In production, these should stream from /api/metrics/summary.
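A consumer of that endpoint would need to validate the payload before rendering. The sketch below is one way to do it, assuming a JSON body whose field names (activeGpus, jobsPerSec, avgFinalitySec, totalTflops) mirror the four metrics above; the actual schema of /api/metrics/summary is not documented here, so treat these names as placeholders.

```typescript
// Hypothetical shape of the /api/metrics/summary payload.
// Field names are assumptions matching the dashboard tiles above.
interface MetricsSummary {
  activeGpus: number;     // online + schedulable GPUs
  jobsPerSec: number;     // dispatch rate
  avgFinalitySec: number; // consensus p50 finality, seconds
  totalTflops: number;    // estimated peak capacity
}

// Parse and sanity-check a summary payload before handing it to the UI.
function parseSummary(json: string): MetricsSummary {
  const data = JSON.parse(json);
  const required = ["activeGpus", "jobsPerSec", "avgFinalitySec", "totalTflops"];
  for (const key of required) {
    if (typeof data[key] !== "number" || !Number.isFinite(data[key])) {
      throw new Error(`missing or invalid field: ${key}`);
    }
  }
  return data as MetricsSummary;
}
```

In production this would wrap a polling fetch or a streaming subscription; the validation step is what keeps a malformed payload from rendering as NaN tiles.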

Past incidents


Degraded performance in AP-Northeast

Compute Grid · Regional Capacity
Oct 22, 2025
Resolved · Impact: minor

Capacity constraints increased queue time for H100 scheduling. Additional capacity was bridged from partner providers and routing weights were adjusted.

Registry sync delay

Registry API · Validators
Sep 14, 2025
Resolved · Impact: none

Validator nodes experienced a brief delay syncing provider metadata. No compute jobs were interrupted.

Need deeper telemetry?

Integrate provider heartbeats and route metrics to your own dashboards with token-scoped ingestion.
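One minimal way to shape such an ingestion call is sketched below. The heartbeat fields, the bearer-token header, and the request layout are all assumptions for illustration; the real ingestion API, its endpoint, and its token scoping are not specified on this page.

```typescript
// Hypothetical provider heartbeat payload; field names are assumptions.
interface Heartbeat {
  providerId: string;   // stable provider identifier
  region: string;       // e.g. "ap-northeast"
  gpusOnline: number;   // schedulable GPUs reported by this provider
  p95LatencyMs: number; // locally measured p95 latency
  timestamp: string;    // ISO-8601 emission time
}

// Build the headers and body for a token-scoped ingestion POST.
// The Authorization scheme (Bearer) is a common convention, assumed here.
function buildIngestRequest(
  token: string,
  hb: Heartbeat
): { headers: Record<string, string>; body: string } {
  return {
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(hb),
  };
}
```

A dashboard integration would pass these options to fetch against the ingestion endpoint on a fixed heartbeat interval, with the token scoped to write-only metric ingestion.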