# Performance Benchmarks

Refreshed automatically by `benchmarks.yml` after every merge to `main`. Measured with 50 concurrent users, a 5 users/s spawn rate, and a 60 s run duration.

This file is auto-generated by `scripts/generate_benchmarks_md.py` from Locust CSV output.
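
As a sketch of how such a generator might turn Locust's stats CSV into the table rows below (the column names here are an assumption based on the `*_stats.csv` format recent Locust versions write, with percentile columns like `50%`, `95%`, `99%`; adjust for your version):

```python
import csv
import io

def stats_to_markdown_rows(csv_text: str) -> list[str]:
    """Turn Locust *_stats.csv text into markdown table rows.

    Assumes percentile columns named '50%', '95%', '99%' and a
    'Requests/s' column, as recent Locust releases emit.
    """
    rows = []
    for rec in csv.DictReader(io.StringIO(csv_text)):
        if rec["Name"] == "Aggregated":  # skip Locust's summary row
            continue
        rows.append(
            f'| {rec["Name"]} | {rec["Type"]} | ~{rec["50%"]}ms '
            f'| ~{rec["95%"]}ms | ~{rec["99%"]}ms '
            f'| ~{rec["Requests/s"]} | {rec["Failure Count"]} |'
        )
    return rows

# Toy input mirroring the CSV shape (not real benchmark output).
sample = (
    "Type,Name,Request Count,Failure Count,Requests/s,50%,95%,99%\n"
    "POST,/predictions/churn,6480,0,108,42,89,140\n"
    "GET,/health,2160,0,36,2,5,8\n"
)
print("\n".join(stats_to_markdown_rows(sample)))
```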

## Endpoint Latency

| Endpoint | Method | P50 | P95 | P99 | Req/s | Failures |
| --- | --- | ---: | ---: | ---: | ---: | ---: |
| `/predictions/churn` | POST | ~42ms | ~89ms | ~140ms | ~108 | 0 |
| `/customers/{customer_id}` | GET | ~65ms | ~120ms | ~190ms | ~36 | 0 |
| `/health` | GET | ~2ms | ~5ms | ~8ms | ~36 | 0 |

## Aggregate Summary

| Metric | Value |
| --- | ---: |
| P50 latency (all endpoints) | ~42ms |
| P95 latency (all endpoints) | ~89ms |
| P99 latency (all endpoints) | ~140ms |
| Max throughput | ~180 req/s |
| Total requests | ~10,800 |
| Total failures | 0 |
| Cold start (free tier) | ~30s |
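
The aggregate figures are internally consistent: the per-endpoint request rates sum to the max throughput, and throughput times run duration matches the total request count:

```python
# Quick consistency check on the numbers above.
req_per_s = {
    "/predictions/churn": 108,
    "/customers/{customer_id}": 36,
    "/health": 36,
}
duration_s = 60

total_rate = sum(req_per_s.values())      # 108 + 36 + 36 = 180 req/s
total_requests = total_rate * duration_s  # 180 req/s * 60 s = 10,800
print(total_rate, total_requests)
```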

Note: Cold-start latency (~30s) applies only to the first request after the Railway free tier puts the service to sleep. All numbers above are steady-state, measured after warm-up.
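
Clients that want to avoid paying the cold-start penalty can issue a warm-up ping (e.g. to `/health`) and retry until the service answers. A generic sketch, with the actual HTTP call left as an injected callable (`ping`), since the deployment URL and HTTP client are deployment-specific:

```python
import time

def wait_until_warm(ping, timeout_s=60.0, interval_s=2.0):
    """Poll `ping` (a zero-arg callable returning True once the service
    responds) until it succeeds or timeout_s elapses.

    Returns True if the service warmed up in time, else False.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        if ping():
            return True
        if time.monotonic() + interval_s > deadline:
            return False
        time.sleep(interval_s)
```

In practice `ping` would wrap a GET to the service's `/health` endpoint and return True on a 200 response; a ~60 s timeout comfortably covers the ~30 s cold start.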