
Benchmarks

The IPC benchmarks below use real Tauri IPC (serde + bridge + JS engine). The mock benchmarks section covers unit-test-level measurements without real IPC. Results may vary based on machine, OS, and concurrent load.

Test Environment

| Component | Version |
| --- | --- |
| Machine | MacBook Pro 16" (2021) |
| Chip | Apple M1 Max |
| RAM | 32 GB |
| OS | macOS Sequoia 15.6.1 |
| Tauri | 2.10.2 |
| Node.js | 22.21.1 |
| pnpm | 8.15.1 |
| TypeScript | 5.9.3 |
| Vitest | 4.0.18 |
| Rust edition | 2021 (MSRV 1.77.2) |

Payload Size

All benchmarks use small string payloads ('snap', 'data-2', 'data-3', ...) — typically under 50 bytes. This is intentional: the benchmarks measure coalescing and protocol efficiency, not serialization throughput.

For large payloads (e.g. 100 KB+ JSON), expect higher per-invoke latency due to serde serialization on the Rust side and JSON parsing in JS. The coalescing ratio stays the same — only the absolute times increase.


Real IPC Roundtrip Latency

Full cycle: invoke(update) → event delivery → invoke(getSnapshot) → apply.

| Metric | Value |
| --- | --- |
| p50 | 2 ms |
| p95 | 3 ms |
| p99 | 5 ms |
| min | 1 ms |
| max | 8 ms |
| mean | 2.34 ms |
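For reference, percentile figures like these can be reproduced from raw latency samples with a simple nearest-rank helper. A minimal sketch — the `percentile` helper and the sample array below are illustrative, not part of the library:

```typescript
// Nearest-rank percentile over a set of latency samples (in ms).
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Hypothetical roundtrip samples collected around invoke() calls.
const samples = [1, 2, 2, 2, 3, 2, 1, 3, 5, 2];
console.log(percentile(samples, 50)); // 2
console.log(percentile(samples, 95)); // 5
console.log(Math.min(...samples), Math.max(...samples)); // 1 5
```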

INFO

These numbers include the complete round-trip through the Rust backend and back to JavaScript. Real-world latency is dominated by the Tauri IPC bridge (~0.5 ms per invoke).


Coalescing Efficiency

How many actual IPC fetches happen when N events fire in rapid succession:

| Events fired | Actual fetches | Reduction ratio | E2E time | Rust emit time |
| --- | --- | --- | --- | --- |
| 10 | 2 | 80% | 4 ms | 0.3 ms |
| 50 | 2 | 96% | 5 ms | 1.2 ms |
| 100 | 2 | 98% | 6 ms | 2.5 ms |
| 500 | 6 | 98.8% | 22 ms | 12 ms |
| 1000 | 16 | 98.4% | 59 ms | 25 ms |

3 runs per event count, median reported.

TIP

Up to ~100 events, coalescing reduces IPC calls to exactly 2 — one immediate fetch and one trailing fetch for the latest state. At higher volumes, a few extra fetches occur as new invalidation events arrive during the trailing fetch.
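The leading + trailing fetch behavior can be modeled with a small async loop. This is an illustrative sketch, not the library's actual code; `fetchSnapshot` stands in for the real snapshot invoke:

```typescript
// Sketch of leading + trailing fetch coalescing (names are illustrative).
class Coalescer {
  private fetching = false;
  private dirty = false;
  fetchCount = 0;
  private fetchSnapshot: () => Promise<void>;

  constructor(fetchSnapshot: () => Promise<void>) {
    this.fetchSnapshot = fetchSnapshot;
  }

  async invalidate(): Promise<void> {
    if (this.fetching) {
      this.dirty = true; // arrived mid-fetch: coalesce into one trailing fetch
      return;
    }
    this.fetching = true;
    do {
      this.dirty = false;
      this.fetchCount++;
      await this.fetchSnapshot(); // leading fetch starts immediately
    } while (this.dirty); // one trailing fetch picks up everything coalesced
    this.fetching = false;
  }
}

// 100 rapid invalidations collapse into 2 fetches (leading + trailing):
const c = new Coalescer(() => new Promise((r) => setTimeout(r, 1)));
for (let i = 0; i < 100; i++) void c.invalidate();
setTimeout(() => console.log(c.fetchCount), 50); // 2
```

With slower simulated IPC or events arriving during the trailing fetch, the loop runs a few extra iterations, which matches the small fetch counts seen at 500+ events.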


End-to-End Throughput

| Metric | Value |
| --- | --- |
| 1000 events → JS applied | 59 ms |
| Throughput | ~17,000 events/sec |
| Rust emit overhead | ~25 ms (42%) |
| JS overhead (coalesce + apply) | ~34 ms (58%) |
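The derived figures follow directly from the raw measurements; a quick sketch of the arithmetic:

```typescript
// Reproducing the throughput and overhead split from the raw numbers above.
const events = 1000;
const totalMs = 59;      // end-to-end time for 1000 events
const rustEmitMs = 25;   // measured Rust emit overhead
const jsMs = totalMs - rustEmitMs;

const throughput = Math.round(events / (totalMs / 1000)); // events per second
console.log(throughput);                                // 16949, i.e. ~17,000/sec
console.log(Math.round((rustEmitMs / totalMs) * 100));  // 42 (% Rust share)
console.log(Math.round((jsMs / totalMs) * 100));        // 58 (% JS share)
```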

Coalescing in Practice

Estimated real-world scenario: a slider firing at 60fps (16.6 ms between events).

| Metric | Without coalescing | With coalescing |
| --- | --- | --- |
| IPC fetches/sec | 60 | ~2 |
| State applies/sec | 60 | ~2 |
| Reduction | | ~97% |

This is an estimate extrapolated from the coalescing efficiency data above (100 events → 2 fetches). A slider at 60fps fires ~60 events/sec — well within the range where coalescing reduces fetches to 2 per burst. Even with debounce/throttle on top of coalescing, the first update is always immediate — no perceived latency for the user.

You can verify this yourself: run apps/demo with pnpm tauri:dev and use the benchmark panel's slider test.


How Alternatives Compare

No other library in this category publishes IPC-level benchmarks, so a direct comparison of numbers isn't possible. Here's what we know about their approaches:

| Library | Batching strategy | Expected IPC calls for 100 rapid events |
| --- | --- | --- |
| state-sync | Revision-based coalescing | 2 |
| @tauri-store | SaveStrategy debounce/throttle | 1 (delayed) |
| tauri-plugin-store | Debounce | 1 (delayed) |
| zubridge | None (pass-through) | ~100 (estimated) |
| zustand-sync-tabs | None (BroadcastChannel) | ~100 (no IPC) |

Key difference

Debounce-based libraries (like @tauri-store's SaveStrategy) wait for silence before writing — the first event is delayed. Coalescing delivers the first event immediately and batches the rest. See Coalescing vs Debounce for details.
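The timing difference can be seen in a minimal sketch (illustrative helpers, not code from either library; the coalescing side is simplified to show only first-event delivery):

```typescript
// Debounce: reset a timer on every event; fire only after waitMs of silence.
function debounce(fn: () => void, waitMs: number): () => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return () => {
    clearTimeout(timer);
    timer = setTimeout(fn, waitMs); // even the first event waits waitMs
  };
}

// Coalesce: fire on the first event right away; absorb the rest of the burst.
function coalesce(fn: () => void): () => void {
  let inFlight = false;
  return () => {
    if (inFlight) return; // absorbed into the in-flight delivery
    inFlight = true;
    fn(); // first event: zero added latency
    queueMicrotask(() => { inFlight = false; });
  };
}

let debounced = 0;
let coalesced = 0;
const d = debounce(() => debounced++, 100);
const k = coalesce(() => coalesced++);
for (let i = 0; i < 10; i++) { d(); k(); }
console.log(coalesced, debounced); // 1 0 — coalescing delivered; debounce still waiting
```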


Mock Benchmarks (unit tests)

These run without real IPC, testing the engine logic in isolation.

compareRevisions throughput

| Input category | Throughput |
| --- | --- |
| Small strings ('42' vs '17') | > 1M ops/sec |
| Different-length ('99' vs '100') | > 1M ops/sec |
| Equal strings ('1000' vs '1000') | > 1M ops/sec |
| Large u64 strings (18-digit) | > 1M ops/sec |

Tested with 10K warmup iterations + 1M benchmark iterations per category.
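One plausible way to compare decimal u64 revision strings without BigInt — assuming canonical encodings with no leading zeros, and not necessarily the library's actual implementation — is length-then-lexicographic comparison, which covers all four input categories above:

```typescript
// Compare two non-negative integer revisions encoded as decimal strings.
// A shorter string is a smaller number; equal lengths compare lexicographically.
// Returns -1, 0, or 1. Assumes canonical encoding (no leading zeros).
function compareRevisions(a: string, b: string): -1 | 0 | 1 {
  if (a.length !== b.length) return a.length < b.length ? -1 : 1;
  if (a === b) return 0;
  return a < b ? -1 : 1;
}

console.log(compareRevisions('99', '100'));    // -1: 99 < 100 despite '99' > '100' lexically
console.log(compareRevisions('1000', '1000')); // 0
console.log(compareRevisions('42', '17'));     // 1
```

Avoiding BigInt keeps the comparison allocation-free on the hot path, consistent with the > 1M ops/sec figures.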

Engine coalescing

| Events | Simulated IPC delays | Fetches | Result |
| --- | --- | --- | --- |
| 10 | 1 ms, 10 ms, 50 ms | ≤ 2 | Pass |
| 50 | 1 ms, 10 ms, 50 ms | ≤ 2 | Pass |
| 100 | 1 ms, 10 ms, 50 ms | ≤ 2 | Pass |
| 500 | 1 ms, 10 ms, 50 ms | ≤ 2 | Pass |
| 1000 | 1 ms, 10 ms, 50 ms | ≤ 2 | Pass |

Race condition verification

100 events with rotating IPC delays (1ms, 5ms, 10ms, 30ms, 50ms). Applied revisions are verified to be strictly monotonic — no out-of-order state ever observed.
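A monotonicity check of this kind can be sketched as follows (illustrative helper, assuming revisions are canonical decimal strings):

```typescript
// Verify that a sequence of applied revisions is strictly monotonic,
// i.e. no out-of-order state was ever applied.
function isStrictlyMonotonic(applied: string[]): boolean {
  for (let i = 1; i < applied.length; i++) {
    const prev = applied[i - 1];
    const cur = applied[i];
    // Numeric compare of decimal strings: longer means larger, else lexicographic.
    const greater =
      cur.length !== prev.length ? cur.length > prev.length : cur > prev;
    if (!greater) return false;
  }
  return true;
}

console.log(isStrictlyMonotonic(['1', '2', '5', '10'])); // true
console.log(isStrictlyMonotonic(['1', '3', '2']));       // false: out of order
```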


How to Run

Unit test benchmarks

The -- benchmark flag is a Vitest filename filter — it runs only test files matching "benchmark" in their path.

```bash
# Core engine benchmarks (compareRevisions, coalescing, race conditions)
pnpm --filter @statesync/core test -- benchmark

# Tauri transport benchmarks (coalescing with mocked Tauri IPC)
pnpm --filter @statesync/tauri test -- benchmark
```

Real Tauri E2E

```bash
cd apps/demo && pnpm tauri:dev
# Benchmark window opens automatically in dev mode
```

Disclaimer

  • Numbers depend on machine, OS, and concurrent load
  • Real Tauri IPC adds ~0.5 ms per invoke call
  • Rust emit time scales linearly with event count and number of listeners
  • Coalescing efficiency is deterministic for low event counts (≤100) and slightly variable for higher counts
  • All benchmarks run on a single machine — network latency is not a factor
  • Production workloads with heavier serialization payloads will see higher latency than these synthetic benchmarks
  • Benchmarks use small string payloads (< 50 bytes) — see Payload Size above


Released under the MIT License.