Overhead Performance
TL;DR
- Goal: measure Vovk.ts overhead over native Next.js route handlers (not the HTTP stack).
- Routing: O(1) across 1–10,000 controllers (20,000 endpoints). Median latency ~1.1 µs; at 10,000 controllers ~1.3 µs. Throughput ~750k–930k ops/s/core.
- Cold start: O(n). ~5.5 ms at 1,000 controllers; ~63 ms at 10,000. About 8–10× the cost of no‑op decorators.
- Notes: Tinybench on Apple M4 Pro. Next.js runtime cost is out of scope. Compiled from real benchmark output with AI assistance and minor edits. See vovk-perf-test repo for scripts.
Reproducing the Tests
Clone the repo:
```sh
git clone https://github.com/finom/vovk-perf-test.git
cd vovk-perf-test
```
Install dependencies:
```sh
npm i
```
Run the performance tests:
```sh
npm run perf-test
```
Overview
Vovk.ts sits on top of Next.js API routes and generates handlers via decorators applied to controller methods:
src/app/api/[[...vovk]]/route.ts
```typescript
export const { GET, POST } = initSegment({ controllers });
```
We measure framework overhead in two dimensions:
- Request Overhead: per-request routing/handler overhead.
- Cold Start Overhead: initialization time for controllers/metadata.
Source: test scripts in the vovk-perf-test repository.
Request Overhead
Methodology (short)
- Autogenerate N controllers (N ∈ {1, 10, 100, 1,000, 10,000}), each exposing:
- A GET endpoint without params (rejects an unexpected `id`).
- A POST endpoint with path param `{id}` (requires `id`).
- Minimal handler logic; measure full routing + handler path.
- Tinybench: 100 ms min per test, nanosecond timing; report median latency/throughput.
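The Tinybench setup above can be approximated with a minimal, self-contained timing loop. This is a sketch, not the actual test script: `benchmark` and the no-op handler passed to it are hypothetical stand-ins for the real routed request path.

```typescript
// Minimal stand-in for the Tinybench loop: run a handler repeatedly for at
// least `minTimeMs`, then report median latency (ns) and throughput (ops/s).
function benchmark(
  fn: () => void,
  minTimeMs = 100
): { medianNs: number; opsPerSec: number } {
  const samplesNs: number[] = [];
  const deadline = Date.now() + minTimeMs;
  while (Date.now() < deadline) {
    const start = process.hrtime.bigint();
    fn();
    samplesNs.push(Number(process.hrtime.bigint() - start));
  }
  samplesNs.sort((a, b) => a - b);
  // Clamp to 1 ns so throughput stays finite if a sample rounds to zero.
  const medianNs = Math.max(1, samplesNs[Math.floor(samplesNs.length / 2)]);
  return { medianNs, opsPerSec: 1e9 / medianNs };
}

// Hypothetical no-op "routed handler" standing in for a Vovk.ts endpoint.
const result = benchmark(() => void 0);
console.log(result);
```

Tinybench adds warmup, outlier handling, and richer statistics on top of this basic idea.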
Results
| Controllers | Endpoints | GET Latency (med) | POST Latency (med) | GET Throughput (med ops/s) | POST Throughput (med ops/s) |
|---|---|---|---|---|---|
| 1 | 2 | 1,083 ns | 1,042 ns | 923,361 | 959,693 |
| 10 | 20 | 1,084 ns | 1,083 ns | 922,509 | 923,361 |
| 100 | 200 | 1,083 ns | 1,125 ns | 923,361 | 888,889 |
| 1,000 | 2,000 | 1,083 ns | 1,125 ns | 923,361 | 888,889 |
| 10,000 | 20,000 | 1,292 ns | 1,333 ns | 773,994 | 750,188 |
Key takeaways:
- O(1) routing: latency is stable from 1 to 1,000 controllers, with a small bump at 10,000.
- Sub‑µs overhead at typical scales; GET ≈ POST latency indicates cheap path-param extraction.
Cold Start Overhead
Methodology (short)
For N ∈ {1, 10, 100, 1,000, 10,000} measure:
- App creation, decorator processing, metadata build, and initSegment().
- Compare to equivalent classes using no‑op decorators to isolate framework work.
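As a rough illustration of the baseline side of this comparison, the sketch below (hypothetical; not the vovk-perf-test script) times creation of N controller-like classes passed through a no-op class decorator applied as a plain function, so no compiler flags are needed:

```typescript
// Baseline side of the cold-start comparison: create N controller-like
// classes and pass each through a no-op class decorator. Real Vovk.ts init
// would additionally build routing metadata on top of this work.
type AnyClass = new (...args: any[]) => any;

const noopClassDecorator = <T extends AnyClass>(cls: T): T => cls;

function makeControllers(n: number): AnyClass[] {
  const controllers: AnyClass[] = [];
  for (let i = 0; i < n; i++) {
    class Controller {
      getOne() {}
      create() {}
    }
    controllers.push(noopClassDecorator(Controller));
  }
  return controllers;
}

// Time init for a given N; the cost grows roughly linearly in N (O(n)).
function timeInitNs(n: number): number {
  const start = process.hrtime.bigint();
  makeControllers(n);
  return Number(process.hrtime.bigint() - start);
}
```

Comparing this baseline against the same loop with real framework decorators is what isolates the per-controller framework cost in the table below.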
Example no‑op decorators:
```typescript
function noopDecorator() {
  return function (..._args: any[]) {};
}

function noopClassDecorator() {
  return function <T extends new (...a: any[]) => any>(c: T) {
    return c;
  };
}
```
Results
| Controllers | Vovk.ts Init Time (med) | No-op Time (med) | Overhead Ratio | Throughput (ops/s) |
|---|---|---|---|---|
| 1 | 5.7 μs | 0.54 μs | 10.6x | 175,193 |
| 10 | 52.7 μs | 5.1 μs | 10.3x | 18,972 |
| 100 | 520.1 μs | 52.4 μs | 9.9x | 1,923 |
| 1,000 | 5,519.8 μs | 646.1 μs | 8.5x | 181 |
| 10,000 | 62,769.2 μs | 7,629.1 μs | 8.2x | 16 |
Key takeaways:
- O(n) init: linear in controller count with stable per‑controller cost.
- Absolute times are small for long‑lived services; still acceptable for serverless at typical sizes.
Practical guidance
- For high-performance workloads: split the app into multiple segments (e.g., serverless functions built with Next.js route.ts files).
- In theory, with careful segment management and adequate hardware, a single Next.js/Vovk.ts app can host up to ~1,000,000 endpoints. Validate this in your environment; practical limits will be memory, bundle size, cold start budgets, and platform quotas.
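Splitting into segments means each segment gets its own optional catch-all route file with its own `initSegment` call. A structural sketch (the `usersControllers`/`billingControllers` groupings are hypothetical):

```typescript
// src/app/api/users/[[...vovk]]/route.ts — one segment, one cold start
export const { GET, POST } = initSegment({ controllers: usersControllers });

// src/app/api/billing/[[...vovk]]/route.ts — deployed as a separate function
export const { GET, POST } = initSegment({ controllers: billingControllers });
```

Each segment initializes only its own controllers, so the O(n) cold-start cost applies per segment rather than to the whole app.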
Benchmarks: Tinybench on Node.js; hardware: Apple M4 Pro. Numbers can vary by runtime, hardware, and build settings. Scripts/results: https://github.com/finom/vovk-perf-test