# Performance
Most Tynd apps are I/O-bound (IPC round-trips, disk, network). The CPU-bound case matters when you hit it. This guide covers where time actually goes, how to measure, and how to fix it.
## Typical timings
| Operation | lite | full |
|---|---|---|
| Cold start (blank app) | ~40-80 ms | ~200-500 ms (spawn Bun) |
| Frontend first paint | ~100-200 ms after window creation | same |
| RPC round-trip (JSON, under 1 KB) | ~0.2-1 ms | ~0.5-2 ms |
| OS call (`fs.readText` for a 10 KB file) | ~1-3 ms | same |
| Binary IPC (`fs.readBinary`, 10 MB) | ~20-40 ms | same |
| Backend hot reload (dev) | ~100-300 ms | ~200-500 ms |
These are reference points on a mid-range machine (M2, Ryzen 7). Your mileage varies.
## Startup
The biggest pain point is cold-start-to-first-pixel.
### What happens at launch
- OS loads the binary (~10-50 ms, dominated by AV scan on Windows).
- TYNDPKG trailer read; frontend assets pre-warmed on a background thread.
- Backend bundle loaded:
  - lite — QuickJS evals `bundle.js` (~20-50 ms for a typical app).
  - full — Bun binary extracted (cached across launches) + subprocess spawned (~200-500 ms).
- Window created, WebView initialized.
- Frontend entry executes; first RPC can round-trip.
### Shrinking startup
- Pick `lite` when you can. 10× faster cold start, smaller binary, smaller RAM footprint.
- Keep the backend bundle small. Every 100 KB of backend code is ~5-15 ms of eval time in lite. Audit your deps with `bun build --target=bun backend/main.ts --minify` and check the output size.
- Defer OS calls. `app.onReady` fires after the WebView is alive. Don’t block it with synchronous-looking loops — move work into the first user interaction.
- Lazy-load frontend routes. Use your framework’s code-splitting (Vite’s dynamic `import()`). The TYNDPKG scheme serves assets on-demand — smaller initial chunk = faster first paint.
- Pre-warm caches. If your app reads a config file at startup, let Rust cache it (the frontend asset cache is already pre-warmed) or start the read early from a background initializer.
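The "defer OS calls" advice can be sketched with a small lazy-once helper: wrap the expensive initializer in a thunk so it runs at most once, and only when first needed. This is a generic pattern, not a Tynd API; the config shape here is hypothetical.

```typescript
// Wrap an expensive async initializer so it runs at most once,
// and only on first use (e.g. the first user interaction).
function lazyOnce<T>(init: () => Promise<T>): () => Promise<T> {
  let cached: Promise<T> | undefined;
  return () => (cached ??= init());
}

// Hypothetical usage: parse config lazily instead of blocking startup.
// In a real app the initializer might call fs.readText("config.json").
const getConfig = lazyOnce(async () => {
  return { theme: "dark" };
});
```

`app.onReady` stays cheap; the first call to `getConfig()` pays the cost, and later calls reuse the cached promise.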
### Measuring cold start
```ts
const t0 = performance.now();
// ... bootstrap your app ...
console.log("frontend ready in", performance.now() - t0, "ms");
```

For the full end-to-end, launch from a shell wrapper that prints its own time around the binary:
```sh
time ./release/my-app --exit-after-ready  # if you add a debug flag
```

## IPC overhead
A typical JSON RPC round-trip is ~0.5-2 ms. That’s fast for one call but adds up quickly.
Bad:

```ts
// 100 RPC calls for one user action
for (const file of files) {
  await api.processOne(file);
}
```

Good:

```ts
// One RPC call
await api.processAll(files);
```

### Batching guidance
- If you catch yourself calling RPC in a tight loop, batch on the backend.
- If the call returns a lot of small objects, consider streaming (`async function*`) so the UI can render progressively.
- For multi-MB binary data, use `tynd-bin://` — `fs.readBinary` / `compute.hash`. Don’t encode binary as base64 through the JSON channel.
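The streaming option can be sketched with a plain async generator: the backend still batches the work into one call, but yields results as they complete so the UI can render each one on arrival. `processOne` here is a hypothetical stand-in for your real per-file work.

```typescript
// Backend: yield results one at a time instead of returning one big array.
async function* processAllStream(files: string[]): AsyncGenerator<string> {
  for (const file of files) {
    yield await processOne(file); // stand-in for real per-file work
  }
}

async function processOne(file: string): Promise<string> {
  return `processed:${file}`;
}

// Frontend: consume progressively with for-await.
async function consume(files: string[]): Promise<string[]> {
  const results: string[] = [];
  for await (const result of processAllStream(files)) {
    results.push(result); // in a real app: append to the UI here
  }
  return results;
}
```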
## CPU-bound JS
QuickJS (lite) is an interpreter — ~5-30× slower than V8 / JSC on tight JS loops (parsing, crypto, image processing). Three options:
- Move the hot path to Rust. `compute.hash` is 10× faster than a JS implementation. The same pattern applies to compression and parsing — if it’s hot and deterministic, a Rust API is the better answer.
- Offload to a worker. `workers.spawn(fn)` runs on its own thread. The UI stays responsive, but the JS is still interpreted — pure CPU work gains little from threading alone.
- Switch to `full`. Bun’s JSC JIT closes the gap. Costs ~37 MB of binary size; worth it for apps where the JS itself is the bottleneck.
```ts
// ❌ lite — ~200 ms to parse 50 MB of JSON on a mid-range machine
const data = JSON.parse(text);

// ✅ lite — offload to a worker
const data = await workers.spawn((s: string) => JSON.parse(s)).then((w) => w.run(text));

// ✅ full — the JIT handles it in ~40 ms, no offload needed
const data = JSON.parse(text);
```

## Memory
- `lite` baseline: ~30-60 MB at idle.
- `full` baseline: ~80-120 MB at idle (the Bun subprocess adds ~50 MB).
- WebView memory depends entirely on your frontend — React + state libraries easily reach 100 MB before your data does.
### Leaks to watch
- Event listeners not unsubscribed. Every `api.on(…)`, `tyndWindow.onResized(…)`, `singleInstance.onSecondLaunch(…)` returns an unsubscribe function. Call it.
- Unbounded streams. Async generators whose consumers have disappeared. Window-close-cancel handles this for secondary windows; for in-app stream handles, call `stream.cancel()` when done.
- Cached query results in `sql`. There is no auto-eviction. If you read 100k rows into memory on every query, you’ll feel it.
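A simple discipline against the listener leak is to collect every unsubscribe function in one registry and call them all on teardown. This is a generic sketch, not a Tynd API; the returned unsubscribe functions are whatever `api.on(…)` and friends hand back.

```typescript
type Unsubscribe = () => void;

// Collects unsubscribe functions so teardown can't forget one.
class Subscriptions {
  private subs: Unsubscribe[] = [];

  // Register an unsubscribe function, e.g. the return of api.on(...).
  add(unsub: Unsubscribe): void {
    this.subs.push(unsub);
  }

  // Call every registered unsubscribe once; safe to call again.
  disposeAll(): void {
    for (const unsub of this.subs.splice(0)) unsub();
  }
}
```

Create one `Subscriptions` per window or view, `add` every listener you register, and call `disposeAll()` when that view goes away.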
### Measuring
```ts
console.log(performance.memory); // available in full (Bun JSC) and most WebViews
```

For Rust-side memory (the host), use your OS’s task manager — `tynd-full` / `tynd-lite` is visible as a process.
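To turn the idle baselines into a leak check, sample heap size at intervals and flag sustained growth. A rough heuristic, not a Tynd API; the sample source is up to you (`performance.memory.usedJSHeapSize` where available), so the function takes plain numbers:

```typescript
// Flags a likely leak when every consecutive sample grows
// by more than `slack` bytes (noise tolerance).
function looksLikeLeak(samples: number[], slack = 1024): boolean {
  if (samples.length < 3) return false; // too few points to judge
  for (let i = 1; i < samples.length; i++) {
    if (samples[i] - samples[i - 1] <= slack) return false;
  }
  return true;
}
```

Sawtooth patterns (grow, then drop after GC) are normal; a line that only goes up across many samples is the signal worth chasing.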
## Binary size
| Runtime | Host | Typical app |
|---|---|---|
| `lite` | ~6.4 MB | ~7-12 MB (+ frontend + backend + sidecars) |
| `full` | ~6.4 MB + ~37 MB Bun (zstd) | ~44-60 MB |
Shrink the bundle:
- Avoid `moment` and `lodash` — use native `Intl.DateTimeFormat` (lite has a stub; use `date-fns`) or per-function imports.
- Tree-shake aggressively: `bun build --minify --tree-shaking` at bundle time.
- Strip source maps for release (`--sourcemap=none`).
- Sidecars dominate — if you ship a 60 MB ffmpeg, your “~10 MB lite app” becomes ~70 MB. Consider auto-downloading it on first launch instead.
## Profiling
### Backend
- Full mode — run `bun --inspect-brk` on the backend spawn command and attach Chrome DevTools.
- Lite mode — no inspector yet. Add `performance.now()` around suspect sections, or switch to `full` temporarily for profiling.
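In lite mode, a small wrapper around `performance.now()` gives coarse per-section timings without an inspector. This is a generic sketch; `api.loadRows` in the usage comment is hypothetical.

```typescript
// Times an async section, logs its duration, and passes the result through.
// The try/finally ensures the timing is logged even if the work throws.
async function timed<T>(label: string, work: () => Promise<T>): Promise<T> {
  const start = performance.now();
  try {
    return await work();
  } finally {
    console.log(`${label}: ${(performance.now() - start).toFixed(1)} ms`);
  }
}

// Hypothetical usage:
// const rows = await timed("load-rows", () => api.loadRows());
```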
### Frontend
Your framework’s tools work normally. React DevTools and Vue DevTools load as browser extensions into the WebView (debug builds only) via `await tyndWindow.openDevTools()`.
### Rust host
You usually don’t need to profile the host unless you’re a Tynd contributor. If you do:
```sh
cargo build --release -p tynd-lite --features tracing
# Run with TYND_LOG=debug to see IPC traffic + timing
```

## Anti-patterns
- `await` in a loop — 10 sequential awaits ≈ 10 ms of latency. Use `Promise.all` for independent work.
- Round-tripping binary through JSON RPC — kills throughput. Use `fs.readBinary` / `compute.hash`.
- Re-parsing JSON on every render — cache the parsed object.
- Polling `fs.stat` instead of using `fs.watch` — watching is cheaper.
- Loading a whole SQL table into memory — paginate or stream.
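The first anti-pattern in code: sequential awaits versus `Promise.all` for independent calls. `fetchOne` is a hypothetical stand-in for any independent RPC call:

```typescript
// Stand-in for an independent RPC call (~0.5-2 ms each in practice).
async function fetchOne(id: number): Promise<number> {
  return id * 2;
}

// ❌ Sequential: total latency = sum of all round-trips.
async function fetchSequential(ids: number[]): Promise<number[]> {
  const out: number[] = [];
  for (const id of ids) out.push(await fetchOne(id));
  return out;
}

// ✅ Parallel: total latency ≈ one round-trip.
async function fetchParallel(ids: number[]): Promise<number[]> {
  return Promise.all(ids.map((id) => fetchOne(id)));
}
```

Both return the same results in the same order; only use the sequential form when each call depends on the previous one’s result.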
## Related
- Binary Data — zero-copy multi-MB IPC.
- Streaming RPC — progressive delivery.
- Runtimes — lite vs full tradeoff.