# Performance
evlog adds ~7µs of overhead per request — that's 0.007ms, orders of magnitude below any HTTP framework or database call. Performance is tracked on every pull request via CodSpeed.
## evlog vs alternatives
All benchmarks run with JSON output to no-op destinations: pino writes to `/dev/null` (sync), winston writes to a no-op stream, consola uses a no-op reporter, and evlog uses silent mode.
### Results
| Scenario | evlog | pino | consola | winston |
|---|---|---|---|---|
| Simple string log | 1.02M ops/s | 472.8K | 689.7K | 373.3K |
| Structured (5 fields) | 818.5K ops/s | 283.4K | 476.5K | 131.9K |
| Deep nested log | 854.9K ops/s | 171.3K | 287.5K | 62.2K |
| Burst (100 logs) | 9.0K ops/s | 4.6K | 8.9K | 2.2K |
| Logger creation | 7.60M ops/s | 2.41M | 121.5K | 1.76M |
| Wide event lifecycle | 86.2K ops/s | 88.4K | — | 34.9K |
evlog wins 5 out of 6 head-to-head comparisons. The only scenario where pino edges ahead is the wide event lifecycle — but the difference is within noise (1.03x), and evlog emits 1 correlated event where pino emits 4 separate log lines.
### What is the "wide event lifecycle"?
This benchmark simulates a real API request:
```ts
const log = createLogger({ method: 'POST', path: '/api/checkout', requestId: 'req_abc' })
log.set({ user: { id: 'usr_123', plan: 'pro' } })
log.set({ cart: { items: 3, total: 9999 } })
log.set({ payment: { method: 'card', last4: '4242' } })
log.emit({ status: 200 })
```

The pino equivalent needs a child logger and four separate log calls:

```ts
const child = pinoLogger.child({ method: 'POST', path: '/api/checkout', requestId: 'req_abc' })
child.info({ user: { id: 'usr_123', plan: 'pro' } }, 'user context')
child.info({ cart: { items: 3, total: 9999 } }, 'cart context')
child.info({ payment: { method: 'card', last4: '4242' } }, 'payment context')
child.info({ status: 200 }, 'request complete')
```

Same CPU cost, but evlog gives you everything in one place.
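The wide-event pattern itself is simple to sketch. The following is a minimal, hypothetical implementation (not evlog's actual code) that accumulates context in a single object and serializes it once at emit time:

```typescript
// Minimal wide-event logger sketch (hypothetical; not evlog's implementation).
type Context = Record<string, unknown>;

function createWideLogger(base: Context) {
  const ctx: Context = { ...base };
  return {
    // Merge fields into the single in-flight event.
    set(fields: Context): void {
      Object.assign(ctx, fields);
    },
    // Serialize exactly once, producing one correlated JSON line.
    emit(fields: Context): string {
      Object.assign(ctx, fields);
      return JSON.stringify(ctx);
    },
  };
}

const log = createWideLogger({ method: 'POST', path: '/api/checkout' });
log.set({ user: { id: 'usr_123' } });
const line = log.emit({ status: 200 });
console.log(line);
```

The point of the sketch: `set()` never serializes, so the per-request cost is a few object merges plus one `JSON.stringify`.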
## Why is evlog faster?
The numbers above aren't magic — they come from deliberate architectural choices:
- **In-place mutations, not copies.** `log.set()` writes directly into the context object via a recursive `mergeInto` function. Other loggers clone objects on every call (object spread, `Object.assign`). evlog never allocates intermediate objects during context accumulation.
- **No serialization until drain.** Context stays as plain JavaScript objects throughout the request lifecycle. `JSON.stringify` runs exactly once, at emit time. Traditional loggers serialize on every `.info()` call: that's 4x serialization for 4 log lines.
- **Lazy allocation.** Timestamps, sampling context, and override objects are created only when actually needed. If tail sampling is disabled (the common case), its context object is never allocated. The `Date` instance used for ISO timestamps is reused across calls.
- **One event, not N lines.** For a typical request, pino emits 4+ JSON lines that all need serializing, transporting, and indexing. evlog emits one. That's 75% less work for your log drain, fewer bytes on the wire, and one row to query instead of four.
- **RegExp caching.** Glob patterns (used in sampling and route matching) are compiled once and cached. Repeated evaluations hit the cache instead of recompiling.
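A recursive in-place merge along these lines (a sketch, not evlog's exact `mergeInto`) shows how context accumulates without intermediate allocations:

```typescript
// Recursive in-place merge: mutates `target`, never clones.
// Sketch only; evlog's real mergeInto may differ in edge-case handling.
function mergeInto(target: Record<string, unknown>, source: Record<string, unknown>): void {
  for (const key of Object.keys(source)) {
    const next = source[key];
    const current = target[key];
    if (
      next !== null && typeof next === 'object' && !Array.isArray(next) &&
      current !== null && typeof current === 'object' && !Array.isArray(current)
    ) {
      // Both sides are plain objects: recurse instead of replacing.
      mergeInto(current as Record<string, unknown>, next as Record<string, unknown>);
    } else {
      target[key] = next;
    }
  }
}

const ctx: Record<string, unknown> = { user: { id: 'usr_123' } };
mergeInto(ctx, { user: { plan: 'pro' }, status: 200 });
// ctx is now { user: { id: 'usr_123', plan: 'pro' }, status: 200 }
```

Compare this with `{ ...ctx, ...fields }`, which allocates a fresh object on every call; over three or four `set()` calls per request, the savings add up.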
## Real-world overhead
For a typical API request:
| Component | Cost |
|---|---|
| Logger creation | 134ns |
| 3x set() calls | 361ns |
| emit() | 950ns |
| Sampling | 69ns |
| Enricher pipeline | 5.20µs |
| Total | ~6.7µs |
For context: a database query takes 1-50ms and an HTTP call 10-500ms. At that scale, evlog's overhead is invisible.
## Bundle size
Every entry point is tree-shakeable. You only pay for what you import.
| Entry | Gzip |
|---|---|
| logger | 3.70 kB |
| utils | 1.41 kB |
| error | 1.21 kB |
| enrichers | 1.92 kB |
| pipeline | 1.35 kB |
| browser | 1.21 kB |
A typical Nuxt setup loads `logger` + `utils`: about 5.1 kB gzip. Bundle size is tracked on every PR and compared against the `main` baseline.
## Detailed benchmarks
### Logger creation
| Operation | ops/sec | Mean |
|---|---|---|
| createLogger() (no context) | 7.28M | 137ns |
| createLogger() (shallow context) | 7.47M | 134ns |
| createLogger() (nested context) | 6.93M | 144ns |
| createRequestLogger() | 7.44M | 134ns |
### Context accumulation (`log.set()`)
| Operation | ops/sec | Mean |
|---|---|---|
| Shallow merge (3 fields) | 3.56M | 281ns |
| Shallow merge (10 fields) | 2.10M | 476ns |
| Deep nested merge | 2.91M | 343ns |
| 4 sequential calls | 2.77M | 361ns |
### Event emission (`log.emit()`)
| Operation | ops/sec | Mean |
|---|---|---|
| Emit minimal event | 1.05M | 950ns |
| Emit with context | 806.8K | 1.24µs |
| Full lifecycle (create + 3 sets + emit) | 773.2K | 1.29µs |
| Emit with error | 24.1K | 41.47µs |
Emit with error is slower because `Error.captureStackTrace()` is an expensive V8 operation (~40µs). This cost is only incurred when an event carries an error.

### Payload scaling
| Payload | ops/sec | Mean |
|---|---|---|
| Small (2 fields) | 787.8K | 1.27µs |
| Medium (50 fields) | 265.2K | 3.77µs |
| Large (200 nested fields) | 48.5K | 20.64µs |
### Sampling
| Operation | ops/sec | Mean |
|---|---|---|
| Tail sampling (shouldKeep) | 14.5M | 69ns |
| Full emit with head + tail | 1.01M | 988ns |
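Part of why the `shouldKeep` check stays in the nanosecond range is the RegExp caching mentioned above. A hedged sketch of the idea (hypothetical code, not evlog's implementation) for glob patterns used in sampling rules:

```typescript
// Compile a simple glob ('*' wildcard) to a RegExp, caching the result
// so repeated evaluations skip recompilation. Sketch only.
const globCache = new Map<string, RegExp>();

function globToRegExp(pattern: string): RegExp {
  let compiled = globCache.get(pattern);
  if (!compiled) {
    // Escape regex metacharacters (but not '*'), then turn '*' into '.*'.
    const escaped = pattern
      .replace(/[.+?^${}()|[\]\\]/g, '\\$&')
      .replace(/\*/g, '.*');
    compiled = new RegExp(`^${escaped}$`);
    globCache.set(pattern, compiled);
  }
  return compiled;
}

globToRegExp('/api/*').test('/api/checkout'); // matches
globToRegExp('/api/*') === globToRegExp('/api/*'); // same cached instance
```

The first evaluation pays the `new RegExp` cost; every later check for the same pattern is a `Map.get` plus a `RegExp.test`.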
### Enrichers
| Enricher | ops/sec | Mean |
|---|---|---|
| User Agent (Chrome) | 922.1K | 1.08µs |
| Geo (Vercel) | 1.88M | 531ns |
| Request Size | 8.46M | 118ns |
| Trace Context | 3.12M | 321ns |
| All combined | 192.4K | 5.20µs |
### Error handling
| Operation | ops/sec | Mean |
|---|---|---|
| createError() | 109.5K | 9.14µs |
| parseError() | 14.71M | 68ns |
| Round-trip (create + parse) | 109.1K | 9.17µs |
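The asymmetry in the table (creation in microseconds, parsing in nanoseconds) is typical of structured error handling: creating an error captures a stack trace, while parsing just rehydrates plain fields. A sketch with hypothetical helpers, not evlog's actual `createError`/`parseError` signatures:

```typescript
// Serialize an Error into a JSON-safe shape, then rebuild it.
// Hypothetical helpers for illustration; not evlog's API.
interface SerializedError {
  name: string;
  message: string;
  stack?: string;
}

function serializeError(err: Error): SerializedError {
  // The stack was already captured when the Error was constructed;
  // copying the fields here is cheap.
  return { name: err.name, message: err.message, stack: err.stack };
}

function parseError(data: SerializedError): Error {
  const err = new Error(data.message);
  err.name = data.name;
  if (data.stack) err.stack = data.stack; // preserve the original trace
  return err;
}

const roundTripped = parseError(serializeError(new RangeError('out of bounds')));
console.log(roundTripped.name, roundTripped.message);
```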
## Methodology & trust
### Can you trust these numbers?
Every benchmark on this page is open source and reproducible. The benchmark files live in `packages/evlog/bench/`: you can read the exact code, run it on your machine, and verify the results.
All libraries are tested under the same conditions:
- Same output mode: JSON to a no-op destination (no disk or network I/O measured)
- Same warmup: each benchmark runs for 500ms after JIT stabilization
- Same tooling: Vitest bench powered by tinybench
- Same machine: when comparing libraries, all benchmarks run in the same process on the same hardware
### CI regression tracking
Performance regressions are tracked on every pull request via two systems:
- CodSpeed runs all benchmarks using CPU instruction counting (not wall-clock timing). This eliminates noise from shared CI runners and produces deterministic, reproducible results. Regressions are flagged directly on the PR.
- Bundle size comparison measures all entry points against the `main` baseline and posts a size delta report as a PR comment.
### Run it yourself
```sh
cd packages/evlog
bun run bench                         # all benchmarks
bunx vitest bench bench/comparison/   # vs alternatives only
bun bench/scripts/size.ts             # bundle size
```