# js-utils-benchmark

Bundle-size, performance, tree-shaking, and DX comparison of seven JavaScript utility libraries: lodash, lodash-es, es-toolkit, radashi, remeda, rambda, and moderndash.
📊 Read the full report → REPORT.md
| Concern | Winner |
|---|---|
| 📦 Smallest whole-lib (gzip) | moderndash (4.8 KB) → rambda (6.1 KB) → es-toolkit (9.2 KB) |
| 📦 Smallest realistic 5-fn app | moderndash (702 B) → radashi (850 B) |
| ⚡ Most perf wins (12 benches) | moderndash (4) → es-toolkit / radashi (3 each) |
| 🌳 Best tree-shaking | es-toolkit, radashi, remeda, rambda, moderndash (all ✅) |
| 🔤 TypeScript-first | es-toolkit, remeda, radashi, moderndash |
| 🏛️ Drop-in lodash compat | lodash (still) or es-toolkit/compat |
| 📈 Adoption (npm DL/wk) | lodash (149 M) → lodash-es (33 M) → es-toolkit (23 M) |
Full numbers, methodology, and per-function breakdowns in REPORT.md.
## Methodology

- **Performance** — `mitata` 1.0 micro-benchmarks across 12 common functions (chunk, groupBy, uniq, cloneDeep, isEqual, pick, omit, difference, intersection, camelCase, get, debounce).
- **Bundle size** — single-function imports + a 5-function realistic-app scenario, minified by `Bun.build`, then gzip / brotli / zstd.
- **Tree-shaking** — single-fn-bundle ÷ whole-lib-bundle ratio.
- **DX & quality** — TypeScript types, ESM/CJS, `exports` map, `sideEffects` flag, dependency count, FP style, public-function count.
- **Ecosystem** — GitHub stars/forks/issues + npm weekly downloads.
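The tree-shaking ratio above can be sketched with Node's built-in zlib (a toy illustration using hypothetical bundle strings, not the repo's actual `Bun.build` output; the real pipeline compares minified bundles):

```typescript
import { gzipSync } from "node:zlib";

// Hypothetical minified bundles — placeholders, not real build artifacts.
const singleFnBundle =
  "export function chunk(a,n){let r=[];for(let i=0;i<a.length;i+=n)r.push(a.slice(i,i+n));return r}";
const wholeLibBundle =
  singleFnBundle + "/* imagine hundreds more exported functions */".repeat(100);

const gzipSize = (code: string) => gzipSync(Buffer.from(code)).length;

// Tree-shaking ratio = single-fn bundle ÷ whole-lib bundle (post-gzip here).
// The closer to 0, the less dead weight one import drags in.
const ratio = gzipSize(singleFnBundle) / gzipSize(wholeLibBundle);
console.log(ratio < 1); // a well-shaken single-fn bundle is far smaller
```

A library with poor shaking (e.g. a CJS build without a `sideEffects` flag) would push this ratio toward 1, since importing one function pulls in most of the library.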
## Requirements

- Bun ≥ 1.3
- macOS / Linux / Windows (any platform Bun runs on)
- (Optional) `gh` CLI for refreshing ecosystem data
## Quick start

```sh
git clone https://github.com/hckhanh/js-utils-benchmark.git
cd js-utils-benchmark
bun install
bun run all    # ≈ 5 min — runs everything → REPORT.md
```

Or piecewise:

```sh
bun run quality   # → results/quality.json
bun run bundle    # → results/bundle.json + bundle-realistic.json
bun run bench     # → results/bench.json (slow, ~3 min)
bun run report    # → REPORT.md
```

## Project structure

```
src/
  data.ts              shared test fixtures
  benchmarks/run.ts    mitata perf suite (12 fns × 7 libs)
  bundle/measure.ts    per-fn raw / min / gzip / brotli / zstd
  bundle/realistic.ts  5-fn "real app" scenario
  quality/score.ts     DX matrix from package.json + types walk
  report.ts            stitches every JSON → REPORT.md
results/               raw outputs (json, committed)
REPORT.md              final report
```
## Ecosystem data

`results/ecosystem.json` (GitHub stats + npm downloads) is fetched manually:

```sh
gh api repos/<owner>/<repo> --jq '{stars:.stargazers_count, forks:.forks_count, openIssues:.open_issues_count, pushed:.pushed_at, lic:.license.spdx_id}'
curl -s https://api.npmjs.org/downloads/point/last-week/<pkg> | jq '.downloads'
```

Then `bun run report` re-renders.
## Contributing

PRs welcome — especially for:

- Adding a new library to the matrix (just edit the seven `LIBS` arrays in `src/{benchmarks,bundle,quality}` and re-run).
- Adding a missing function to the perf suite.
- Re-running on a different machine and posting your numbers as an issue.
When adding a library:

- `bun add <new-lib>`
- Map its function names in `src/bundle/measure.ts` and `src/benchmarks/run.ts` (skip with `""` if a fn isn't available — the runner handles it).
- Add an entry to `FP_STYLE` in `src/quality/score.ts` and the `LIBS` order in `src/report.ts`.
- Run `bun run all` and commit the regenerated `REPORT.md` and `results/*.json`.
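The `""`-skip convention could look roughly like this — a hypothetical shape for the per-library function maps; the actual structures in `src/benchmarks/run.ts` may differ:

```typescript
// Hypothetical per-library map from canonical fn name to the library's
// export name; "" marks a function the library doesn't provide.
type FnMap = Record<string, string>;

const LIBS: Record<string, FnMap> = {
  "es-toolkit": { chunk: "chunk", uniq: "uniq" },
  "new-lib":    { chunk: "chunkArray", uniq: "" }, // no uniq equivalent
};

// The runner benchmarks only the functions a library actually exposes,
// silently dropping "" entries instead of failing.
function runnable(lib: string): string[] {
  return Object.entries(LIBS[lib])
    .filter(([, exported]) => exported !== "")
    .map(([canonical]) => canonical);
}

console.log(runnable("new-lib")); // only "chunk" survives the filter
```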
## Caveats

- Numbers are from a single Apple-silicon Mac running Bun 1.3.13. Re-run on your hardware with your bundler/runtime to get representative numbers.
- `mitata` uses JIT warm-up + GC instrumentation and runs 4096 ops per sample; `do_not_optimize` prevents dead-code elimination.
- `rambda` v11+ is fully auto-curried — `equals(a, b)` returns a function; the benchmarks use `equals(a)(b)` to actually compute.
- `radashi` ships both `clone` (shallow) and `cloneDeep` — the cloneDeep benchmark uses the deep one for fairness.
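The rambda currying caveat matters for benchmarking and can be illustrated with a toy curry helper (a sketch of the behaviour, not rambda's actual implementation):

```typescript
// Minimal 2-arity curry mimicking an auto-curried equals:
// supplying one argument returns a function awaiting the second.
function curry2<A, B, R>(fn: (a: A, b: B) => R) {
  return (a: A) => (b: B) => fn(a, b);
}

// Toy structural equality standing in for rambda's equals.
const equals = curry2(
  (a: unknown, b: unknown) => JSON.stringify(a) === JSON.stringify(b),
);

// A benchmark that stops at equals(a) would only time closure creation,
// not the comparison — hence the suite forces equals(a)(b).
console.log(equals([1, 2])([1, 2])); // true
console.log(equals([1, 2])([9]));    // false
```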
## License

MIT © 2026 Khánh Hoàng