# Degraded performance

Some regions or services are experiencing elevated latency or reduced capacity.

Updated just now (mock) · Percentiles: p95 latency
## Core services

| Service | Scope | Uptime | Status |
| --- | --- | --- | --- |
| ArkNet Mainnet (Consensus) | Finality + validator health | 99.99% | Operational |
| Registry API | Provider discovery + filters | 99.95% | Operational |
| Compute Grid (GPU Nodes) | Scheduling + dispatch | 99.92% | Operational |
| Developer Console | UI + token controls | 100.00% | Operational |
| Auth & IAM | Sessions + scopes | 100.00% | Operational |

## Regional status
| Region | p95 latency | Status | Rating |
| --- | --- | --- | --- |
| US East (N. Virginia) | 14 ms | Healthy | Excellent |
| US West (Oregon) | 42 ms | Healthy | Good |
| EU Central (Frankfurt) | 88 ms | Healthy | Fair |
| AP Northeast (Tokyo) | 140 ms | Constrained | Fair |
## Live throughput

| Metric | Value | Notes |
| --- | --- | --- |
| Active GPUs | 1,284 | online + schedulable |
| Jobs / sec | 48.2 | dispatch rate |
| Avg finality | 1.01 s | consensus p50 |
| Total TFLOPS | 84,200 | estimated peak |
Metrics are mocked for SSR. In production, these should stream from `/api/metrics/summary`.
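A minimal TypeScript polling sketch of what that client could look like. Only the `/api/metrics/summary` path comes from this page; the `MetricsSummary` field names and the 5 s interval are illustrative assumptions.

```ts
// Assumed response shape for /api/metrics/summary (field names are
// illustrative; only the endpoint path is documented above).
interface MetricsSummary {
  activeGpus: number;
  jobsPerSec: number;
  avgFinalitySec: number;
  totalTflops: number;
}

async function fetchMetricsSummary(): Promise<MetricsSummary> {
  const res = await fetch("/api/metrics/summary");
  if (!res.ok) throw new Error(`metrics fetch failed: ${res.status}`);
  return (await res.json()) as MetricsSummary;
}

// Poll the endpoint and hand each snapshot to the dashboard renderer.
// Returns a function that stops the polling loop.
function streamMetrics(
  onUpdate: (m: MetricsSummary) => void,
  intervalMs = 5000,
): () => void {
  const timer = setInterval(async () => {
    try {
      onUpdate(await fetchMetricsSummary());
    } catch {
      // Transient failure: keep showing the last good snapshot.
    }
  }, intervalMs);
  return () => clearInterval(timer);
}
```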
## Past incidents

### Degraded performance in AP-Northeast

Compute Grid · Regional Capacity · Oct 22, 2025 · Resolved · Impact: minor

Capacity constraints increased queue time for H100 scheduling. Additional capacity was bridged from partner providers, and routing weights were adjusted.
### Registry sync delay

Registry API · Validators · Sep 14, 2025 · Resolved · Impact: none

Validator nodes experienced a brief delay syncing provider metadata. No compute jobs were interrupted.
## Need deeper telemetry?

Integrate provider heartbeats and route metrics to your own dashboards with token-scoped ingestion.
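A hedged sketch of what a heartbeat push with a scoped token might look like, in the same TypeScript style. The `/api/telemetry/heartbeat` endpoint, the `Heartbeat` payload fields, and the token format are all assumptions; this page documents only the token-scoped ingestion concept.

```ts
// Hypothetical heartbeat payload; every field name here is illustrative.
interface Heartbeat {
  providerId: string;
  gpusOnline: number;
  p95LatencyMs: number;
  timestamp: string; // ISO 8601
}

// Push one heartbeat using a bearer token scoped to this provider's data.
// The endpoint path is an assumption, not a documented API.
async function pushHeartbeat(ingestToken: string, hb: Heartbeat): Promise<void> {
  const res = await fetch("/api/telemetry/heartbeat", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${ingestToken}`,
    },
    body: JSON.stringify(hb),
  });
  if (!res.ok) throw new Error(`heartbeat rejected: ${res.status}`);
}
```

Scoping the token to a single `providerId` limits the blast radius of a leaked credential: it can write that provider's metrics and nothing else.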