refactor: decompose WarpGraph monolith into method modules #38

Merged
flyingrobots merged 14 commits into main from warp-graph-decompose on Feb 17, 2026
Conversation

@flyingrobots (Member) commented Feb 16, 2026

Summary

  • Decomposes WarpGraph.js from 3260 lines to 416 lines (-87%) by extracting methods into 9 focused modules in src/domain/warp/
  • Introduces wireWarpMethods() infrastructure that wires extracted methods onto WarpGraph.prototype at import time, preserving the existing public API surface (sketched just below this list)
  • Adds TypeScript module augmentation (_wiredMethods.d.ts) so tsc --noEmit sees the wired methods
  • Includes API surface snapshot test and prototype wiring invariant tests to prevent regression
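
For concreteness, here is a minimal sketch of that import-time wiring pattern. The duplicate-detection detail mirrors the excerpt of _wire.js quoted in the review below; everything else is illustrative rather than the file's exact contents.

```js
// _wire.js (sketch): attach every exported function from each method module
// onto the class prototype, failing fast on cross-module name collisions.
export function wireWarpMethods(Class, methodModules) {
  const seen = new Set();
  for (const mod of methodModules) {
    for (const [name, fn] of Object.entries(mod)) {
      if (typeof fn !== 'function') continue; // skip re-exported constants
      if (seen.has(name)) {
        throw new Error(`wireWarpMethods: duplicate method "${name}"`);
      }
      seen.add(name);
      Class.prototype[name] = fn;
    }
  }
}

// WarpGraph.js (sketch): wiring happens once, at import time.
// import * as query from './warp/query.methods.js';
// import * as patch from './warp/patch.methods.js';
// wireWarpMethods(WarpGraph, [query, patch /* , ...the other modules */]);
```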

Method modules

| Module | Methods | LOC |
| --- | --- | --- |
| query.methods.js | hasNode, getNodeProps, neighbors, query, etc. | ~300 |
| patch.methods.js | createPatch, writer, discoverWriters, join, etc. | ~520 |
| sync.methods.js | getFrontier, syncWith, serve, status, etc. | ~530 |
| checkpoint.methods.js | createCheckpoint, runGC, syncCoverage, etc. | ~380 |
| fork.methods.js | fork, createWormhole, etc. | ~330 |
| materializeAdvanced.methods.js | materializeAt, _resolveCeiling, etc. | ~340 |
| subscribe.methods.js | subscribe, watch, _notifySubscribers | ~280 |
| provenance.methods.js | patchesFor, materializeSlice, etc. | ~270 |
| materialize.methods.js | materialize, _materializeGraph | ~190 |

Test plan

  • All 3661 unit tests pass
  • No-coordination regression suite passes
  • ESLint clean (method files added to relaxed-complexity config)
  • tsc --noEmit passes with 0 errors
  • API surface snapshot test locks the 75-method prototype (sketch after this list)
  • Prototype wiring invariant test verifies wireWarpMethods behavior
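
As an illustration of the snapshot lock, a minimal sketch under stated assumptions: a vitest-style runner (vitest is referenced elsewhere in this PR), a hypothetical import path and import style, and a simplified check. The real test also classifies getters and setters, per the review excerpt further down.

```js
import { describe, it, expect } from 'vitest';
import WarpGraph from '../../src/domain/WarpGraph.js'; // hypothetical path

describe('WarpGraph API surface', () => {
  it('locks the prototype surface', () => {
    const names = Object.getOwnPropertyNames(WarpGraph.prototype)
      .filter((n) => n !== 'constructor')
      .sort();
    // Snapshot fails the build if a member is added, renamed, or dropped.
    expect(names).toMatchSnapshot();
  });
});
```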

Summary by CodeRabbit

  • New Features
    • Decomposed WarpGraph into focused modules exposing queries, subscriptions/watchers, sync (peer + HTTP), checkpoints, GC, forking/patching, advanced materialization, and provenance-driven slice materialization.
  • Bug Fixes
    • Stability, stack-safety, and sync/status correctness improvements; GC guard refinements.
  • Tests
    • Added API-surface, wiring, and coverage tests; adjusted runtime test imports.
  • Chores
    • Lint scope expanded, TypeScript declaration included, package version bumped, build/test script tweaks.

Add API surface snapshot test, wiring invariant test, _internal.js
barrel, and _wire.js prototype wiring helper. These locks must pass
before and after every extraction phase.
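
As a point of reference, the _internal.js barrel mentioned above can be as small as a set of re-exports; the QueryError path below is taken from the review discussion, and other exports are omitted.

```js
// _internal.js (sketch): one import point for shared errors/constants,
// so method modules never reach into deep relative paths directly.
export { default as QueryError } from '../errors/QueryError.js';
```
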
Move hasNode, getNodeProps, getEdgeProps, neighbors, getNodes, getEdges,
getPropertyCount, getStateSnapshot, query, observer, translationCost
to src/domain/warp/query.methods.js. Wire via wireWarpMethods().
…pGraph

Extract 8 method modules from WarpGraph.js (3260 → 416 lines, -87%):

- subscribe.methods.js: subscribe, watch, _notifySubscribers
- provenance.methods.js: patchesFor, materializeSlice, loadPatchBySha, etc.
- fork.methods.js: fork, createWormhole, _isAncestor, etc.
- sync.methods.js: getFrontier, syncWith, serve, status, etc.
- checkpoint.methods.js: createCheckpoint, syncCoverage, runGC, etc.
- patch.methods.js: createPatch, writer, discoverWriters, join, etc.
- materialize.methods.js: materialize, _materializeGraph
- materializeAdvanced.methods.js: materializeAt, _resolveCeiling, etc.

WarpGraph.js retains: constructor, static open(), property getters,
_logTiming(), and the wireWarpMethods() call.

Fixes during wiring: QueryError default→named import in patch.methods,
no-shadow for local 'patch' variables, unused import cleanup.
All 3741 tests pass, lint clean.
- Create _wiredMethods.d.ts with module augmentation so tsc can see
  methods wired onto WarpGraph.prototype at runtime
- Add src/**/*.d.ts to tsconfig.json include
- Remove @private from _logTiming (wired methods need access)
- Fix MaterializedGraph reference in materializeAdvanced.methods.js
- Add @ts-nocheck to Phase 0 tests (test dynamic prototype behavior)

tsc --noEmit now passes with 0 errors.
Blanket src/**/*.d.ts caused GraphNode.d.ts and visualization/index.d.ts
to shadow their .js counterparts, breaking ESLint project resolution.
@coderabbitai bot commented Feb 16, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Splits WarpGraph into focused domain modules under src/domain/warp/, adds shared internals and a wiring utility, provides a TypeScript augmentation for wired methods, expands ESLint scope, and implements materialize/patch/fork/sync/query/subscribe/provenance/checkpoint/GC features plus tests and version bumps. (49 words)

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Wiring & Internals: src/domain/warp/_internal.js, src/domain/warp/_wire.js, src/domain/warp/_wiredMethods.d.ts | Adds shared constants and error re-exports; implements wireWarpMethods() to attach method modules to WarpGraph.prototype with duplicate detection; large TS module augmentation that declares all runtime-wired instance methods (augmentation sketched below this table). |
| Query & Read APIs: src/domain/warp/query.methods.js | Adds read-only materialized-state query surface (hasNode, getNodeProps, getEdgeProps, neighbors, snapshots, getNodes/getEdges, property counts, query builder, observer, translationCost). |
| Subscription & Notifications: src/domain/warp/subscribe.methods.js | Adds subscribe/watch with replay, pattern filtering, optional polling, and _notifySubscribers with per-subscriber isolation. |
| Provenance & Patch Loading: src/domain/warp/provenance.methods.js | Adds provenance-driven APIs: patchesFor, materializeSlice, backward-cone computation, patch loaders, and causal sorting. |
| Patch & Writer APIs: src/domain/warp/patch.methods.js | Implements patch creation/application, Lamport sequencing, writer discovery/lifecycle, writer patch loaders, join, _ensureFreshState, and post-commit hooks. |
| Materialization Core: src/domain/warp/materialize.methods.js, src/domain/warp/materializeAdvanced.methods.js | Implements full/incremental/ceiling materialization, adjacency construction, caching, materializeAt (time-travel), receipts support, and materialized-state persistence. |
| Forking & Wormholes: src/domain/warp/fork.methods.js | Adds fork API, createWormhole, ancestry/divergence checks, validation, and fork writer/ref creation with ForkError usage. |
| Checkpoint, GC & Coverage: src/domain/warp/checkpoint.methods.js | Adds checkpoint creation, coverage sync, patches-since-checkpoint, GC heuristics/run, and GC metrics (renamed metric fields). |
| Sync & Server: src/domain/warp/sync.methods.js | Adds frontier/status APIs, sync request/response handling, syncWith (direct/HTTP with retries/backoff/auth), and serve() HTTP adapter. |
| Config, Lint & Build: eslint.config.js, tsconfig.json, jsr.json, package.json | ESLint scope extended to include new warp method files; TS include added for _wiredMethods.d.ts; package/jsr versions bumped to 11.2.1. |
| Tests & Deno imports: test/unit/..., test/runtime/deno/* | Adds wiring/api-surface unit tests; updates Deno test imports to jsr:@std/assert; adjusts tests for dot encoding and GC metric renames. |
| Docs / Changelog / Roadmap: CHANGELOG.md, ROADMAP.md | Adds release entry describing decomposition, roadmap items for .d.ts generation/validation and CI checks. |
```mermaid
sequenceDiagram
  participant Client as Client
  participant WG as WarpGraph
  participant Store as Persistence
  participant Remote as RemotePeer
  participant HTTP as HttpServer

  Client->>WG: syncWith(remote, options)
  WG->>Store: getFrontier()
  WG->>WG: createSyncRequest()
  alt remote is direct peer
    WG->>Remote: send sync request
    Remote->>WG: respond with sync response
  else remote is HTTP
    WG->>HTTP: POST /sync (with auth headers)
    HTTP->>WG: processSyncRequest()
    WG->>HTTP: return sync response
  end
  WG->>Store: applySyncResponse() -> persist patches
  WG->>WG: maybe materialize() / update frontier / notify subscribers
  WG->>Client: return sync result/status
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes


Poem

🐇
I wired the hops from sha to state,
split the burrow, opened gate by gate,
patches bounce and forks take wing,
materialize — the morning sing,
hop, commit, a rabbit’s tiny fête.

🚥 Pre-merge checks | ✅ 3 passed

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title Check | ✅ Passed | The title clearly and concisely describes the main refactoring: decomposing a monolithic WarpGraph into focused method modules. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%. |


@github-actions

Release Preflight

  • package version: 11.1.0
  • prerelease: false
  • npm dist-tag on release: latest
  • npm pack dry-run: passed
  • jsr publish dry-run: passed

If you tag this commit as v11.1.0, release workflow will publish.

@coderabbitai bot left a comment

Actionable comments posted: 6

🤖 Fix all issues with AI agents
In `@src/domain/warp/_wiredMethods.d.ts`:
- Around line 15-99: The declaration signatures in WarpGraph are out of sync
with implementations: update translationCost to return Promise<{cost: number;
breakdown: any}>, change _loadPatchesBySha to Promise<Array<{patch: any; sha:
string}>>, expand createWormhole’s return to include fromSha, toSha, writerId,
payload and patchCount (e.g. Promise<{fromSha: string; toSha: string; writerId?:
string; payload?: any; patchCount: number}>), adjust patch’s callback type in
patch(build: ...) to accept (p: PatchBuilderV2) => void | Promise<void>, and
change _setMaterializedState to return Promise<{state: any; stateHash: string;
adjacency: any}> so the declarations (functions: translationCost,
_loadPatchesBySha, createWormhole, patch, _setMaterializedState) match their
implementations.

In `@src/domain/warp/materialize.methods.js`:
- Around line 49-51: The early return for the ceiling path currently bypasses
the try/catch and timing logic; move the ceiling check and the call to
this._materializeWithCeiling(ceiling, !!collectReceipts, t0) inside the existing
try block so the call is wrapped by the same try/catch and the
finally/after-catch timing/logging (this._logTiming('materialize', t0, { error
})) always runs; ensure you preserve the boolean coercion of collectReceipts,
rethrow the error after logging in the catch, and return the result as the main
path does.

In `@src/domain/warp/materializeAdvanced.methods.js`:
- Around line 24-43: The JSDoc expects "in" semantics but _resolveCeiling
currently checks options.ceiling !== undefined; change the conditional in the
_resolveCeiling function to test for the presence of the key (use 'ceiling' in
options) so that an explicit { ceiling: undefined } is treated as an override
and returns options.ceiling, otherwise fall back to this._seekCeiling.

In `@src/domain/warp/patch.methods.js`:
- Around line 396-450: Update the discoverTicks return type to include tickShas:
amend the JSDoc for function discoverTicks so perWriter entries are typed as {
ticks: number[], tipSha: string | null, tickShas: Record<number, string> } and
the overall return as { ticks: number[], maxTick: number, perWriter: Map<string,
{ticks:number[], tipSha:string|null, tickShas: Record<number,string>}> }; also
update the declaration in _wiredMethods.d.ts for the discoverTicks method to
match the same corrected return signature (replace the incorrect perWriter:
Map<string, number> with the Map to the proper object shape containing
tickShas).

In `@src/domain/warp/query.methods.js`:
- Around line 116-144: The runtime neighbors function returns objects shaped {
nodeId, label, direction } but the TypeScript declaration in _wiredMethods.d.ts
(around the currently noted line) declares Array<{ neighborId: string; label:
string }>; update the declaration to match the implementation by changing the
return type to Array<{ nodeId: string; label: string; direction: 'outgoing' |
'incoming' }>, ensuring the exported signature for neighbors aligns with the
runtime shape and the direction literal types match those emitted by neighbors.

In `@src/domain/warp/subscribe.methods.js`:
- Around line 246-271: The loop over this._subscribers using for...of is
vulnerable to mutations (e.g., unsubscribe() that splices the array) which can
cause skipping subscribers; fix by iterating a shallow copy (e.g., snapshot =
[...this._subscribers]) and loop over that snapshot when invoking
subscriber.onChange / handling subscriber.pendingReplay, while preserving
existing error handling for subscriber.onError so callbacks can safely mutate
the original this._subscribers without breaking delivery.
🧹 Nitpick comments (12)
src/domain/warp/_wire.js (1)

32-47: Consider also checking for collisions against existing prototype members.

Duplicate detection covers cross-module conflicts but not overwrites of methods/getters already defined on Class.prototype (e.g., _logTiming, property getters). An accidental export with a matching name would silently replace the original.

♻️ Proposed addition before the module loop
+ const existing = new Set(Object.getOwnPropertyNames(Class.prototype));
+
  for (let i = 0; i < methodModules.length; i++) {
    const mod = methodModules[i];
    for (const [name, fn] of Object.entries(mod)) {
      if (typeof fn !== 'function') {
        continue;
      }

+     if (existing.has(name)) {
+       throw new Error(
+         `wireWarpMethods: method "${name}" already exists on ${Class.name}.prototype — ` +
+         `attempted to overwrite from module index ${i}`
+       );
+     }
+
      if (seen.has(name)) {
test/unit/domain/WarpGraph.apiSurface.test.js (1)

49-56: Minor: accessor pair classification.

If a property has both a getter and a setter, it would be labeled as 'getter' only. Consider handling the combined case for accuracy:

-      summary[name] = {
-        type: isGetter ? 'getter' : (isSetter ? 'setter' : 'method'),
+      summary[name] = {
+        type: (isGetter && isSetter) ? 'accessor' : isGetter ? 'getter' : isSetter ? 'setter' : 'method',

Not a correctness issue for snapshot stability, just a labeling accuracy nit.

src/domain/warp/materialize.methods.js (2)

96-100: Spread of writerPatches into allPatches could blow the stack for large writer histories.

allPatches.push(...writerPatches) passes all elements as arguments to push. If a single writer has a very large patch chain (tens of thousands+), this hits the call-stack argument limit.

♻️ Safer alternative
         for (const writerId of writerIds) {
           const writerPatches = await this._loadWriterPatches(writerId);
-          allPatches.push(...writerPatches);
+          for (const p of writerPatches) {
+            allPatches.push(p);
+          }
         }

Or use Array.prototype.concat with reassignment if the pattern is preferred.


180-186: _materializeGraph appears to redundantly call _setMaterializedState.

materialize() already calls _setMaterializedState(state) on line 129. After that, this._materializedGraph.state === state should be true (same reference), making the if block at line 182-184 a no-op in the normal flow. If this is intentional as a defensive guard, a brief comment would clarify the intent.

src/domain/warp/fork.methods.js (1)

237-255: _isAncestor has no upper bound on chain walk — potential infinite loop on corrupt data.

If the commit graph has a cycle (corrupt repo), this loop will never terminate. While the "linear per-writer chain" assumption is documented, a safety bound (e.g., max iterations) would make this more resilient in production without changing the happy-path semantics.

🛡️ Suggested safety bound
 export async function _isAncestor(ancestorSha, descendantSha) {
   if (!ancestorSha || !descendantSha) {
     return false;
   }
   if (ancestorSha === descendantSha) {
     return true;
   }
 
   let cur = descendantSha;
-  while (cur) {
+  const MAX_WALK = 100_000;
+  let steps = 0;
+  while (cur) {
+    if (++steps > MAX_WALK) {
+      throw new Error(`_isAncestor: exceeded ${MAX_WALK} steps — possible cycle`);
+    }
     const nodeInfo = await this._persistence.getNodeInfo(cur);
     const parent = nodeInfo.parents?.[0] ?? null;
     if (parent === ancestorSha) {
       return true;
     }
     cur = parent;
   }
   return false;
 }
src/domain/warp/materializeAdvanced.methods.js (1)

282-339: materializeAt patchLoader silently stops on non-patch commits mid-chain.

At line 308–310, if a non-patch commit (e.g., checkpoint or anchor) sits in the middle of a writer's chain, the loader stops collecting and returns only the patches above it. This means patches below that non-patch commit in the range (fromSha, toSha] would be silently dropped. If this is intentional (checkpoints act as replay boundaries), a brief comment would help future maintainers.

src/domain/warp/patch.methods.js (1)

295-325: createWriter unconditionally writes to console.warn — noisy in production.

Line 300 uses console.warn regardless of environment. Since the logger warning at line 297 already handles this when a logger is configured, the console.warn could be limited to when no logger is present, or removed in favor of the structured logger-only approach.

Reduce noise in production
   if (this._logger) {
     this._logger.warn('[warp] createWriter() is deprecated. Use writer() or writer(id) instead.');
+  } else {
+    // eslint-disable-next-line no-console
+    console.warn('[warp] createWriter() is deprecated. Use writer() or writer(id) instead.');
   }
-  // eslint-disable-next-line no-console
-  console.warn('[warp] createWriter() is deprecated. Use writer() or writer(id) instead.');
src/domain/warp/checkpoint.methods.js (2)

252-284: _maybeRunGC and runGC use Date.now() instead of this._clock.now() — inconsistent with the rest of the codebase.

Other methods (e.g., createCheckpoint, runGC's own _logTiming) use this._clock.now() for timing, which supports test injection. Lines 259, 270, 312, and 356 all use Date.now() directly for GC timing, making GC time-dependent behavior non-mockable in tests.

Use injected clock consistently
     const inputMetrics = {
       ...metrics,
       patchesSinceCompaction: this._patchesSinceGC,
-      timeSinceCompaction: Date.now() - this._lastGCTime,
+      timeSinceCompaction: this._clock.now() - this._lastGCTime,
     };

And similarly, change this._lastGCTime = Date.now() to this._lastGCTime = this._clock.now() on lines 270, 312, and 356.


381-395: getGCMetrics spreads raw metrics then adds aliases — resulting object has duplicate fields under different names.

The ...rawMetrics spread includes original fields like nodeLiveDots, edgeLiveDots, totalTombstones, etc. Then lines 389–391 add nodeCount, edgeCount, tombstoneCount as aliases. The returned object contains both the original and aliased names, which may confuse consumers.

Return only the canonical shape
   const rawMetrics = collectGCMetrics(this._cachedState);
   return {
-    ...rawMetrics,
     nodeCount: rawMetrics.nodeLiveDots,
     edgeCount: rawMetrics.edgeLiveDots,
     tombstoneCount: rawMetrics.totalTombstones,
+    tombstoneRatio: rawMetrics.tombstoneRatio,
     patchesSinceCompaction: this._patchesSinceGC,
     lastCompactionTime: this._lastGCTime,
   };
src/domain/warp/sync.methods.js (1)

150-186: status() calls getFrontier() twice — once indirectly via hasFrontierChanged() and once directly.

When _stateDirty is false, line 156 calls hasFrontierChanged() which internally calls getFrontier(), then line 173 calls getFrontier() again. Each call does a full ref discovery, so this doubles the I/O for the common "fresh" case.

Reuse the frontier
 export async function status() {
+  const frontier = await this.getFrontier();
+
  /** @type {'fresh' | 'stale' | 'none'} */
   let cachedState;
   if (this._cachedState === null) {
     cachedState = 'none';
-  } else if (this._stateDirty || await this.hasFrontierChanged()) {
+  } else if (this._stateDirty || !this._lastFrontier ||
+    frontier.size !== this._lastFrontier.size ||
+    ![...frontier].every(([w, sha]) => this._lastFrontier.get(w) === sha)) {
     cachedState = 'stale';
   } else {
     cachedState = 'fresh';
   }
 
   // ...
-  const frontier = await this.getFrontier();
   const writers = frontier.size;
src/domain/warp/provenance.methods.js (2)

11-11: Inconsistent QueryError import — direct path vs. barrel.

This file imports QueryError directly from '../errors/QueryError.js' while every other method module imports it from './_internal.js'. If the re-export in _internal.js is ever changed or wrapped, this file would diverge silently.

Align with the barrel pattern
-import QueryError from '../errors/QueryError.js';
+import { QueryError } from './_internal.js';

155-194: _computeBackwardCone assumes _provenanceIndex is non-null but doesn't guard itself.

This function is exported and callable directly. Line 170 casts this._provenanceIndex without a null check. If invoked outside materializeSlice (which does the guard), this would throw a non-descriptive error.

Add a defensive guard
 export async function _computeBackwardCone(nodeId) {
+  if (!this._provenanceIndex) {
+    throw new QueryError('No provenance index. Call materialize() first.', {
+      code: 'E_NO_STATE',
+    });
+  }
   const cone = new Map();
   const visited = new Set();

Comment on lines +396 to +450
export async function discoverTicks() {
  const writerIds = await this.discoverWriters();
  /** @type {Set<number>} */
  const globalTickSet = new Set();
  const perWriter = new Map();

  for (const writerId of writerIds) {
    const writerRef = buildWriterRef(this._graphName, writerId);
    const tipSha = await this._persistence.readRef(writerRef);
    const writerTicks = [];
    /** @type {Record<number, string>} */
    const tickShas = {};

    if (tipSha) {
      let currentSha = tipSha;
      let lastLamport = Infinity;

      while (currentSha) {
        const nodeInfo = await this._persistence.getNodeInfo(currentSha);
        const kind = detectMessageKind(nodeInfo.message);
        if (kind !== 'patch') {
          break;
        }

        const patchMeta = decodePatchMessage(nodeInfo.message);
        globalTickSet.add(patchMeta.lamport);
        writerTicks.push(patchMeta.lamport);
        tickShas[patchMeta.lamport] = currentSha;

        // Check monotonic invariant (walking newest->oldest, lamport should decrease)
        if (patchMeta.lamport > lastLamport && this._logger) {
          this._logger.warn(`[warp] non-monotonic lamport for writer ${writerId}: ${patchMeta.lamport} > ${lastLamport}`);
        }
        lastLamport = patchMeta.lamport;

        if (nodeInfo.parents && nodeInfo.parents.length > 0) {
          currentSha = nodeInfo.parents[0];
        } else {
          break;
        }
      }
    }

    perWriter.set(writerId, {
      ticks: writerTicks.reverse(),
      tipSha: tipSha || null,
      tickShas,
    });
  }

  const ticks = [...globalTickSet].sort((a, b) => a - b);
  const maxTick = ticks.length > 0 ? ticks[ticks.length - 1] : 0;

  return { ticks, maxTick, perWriter };
}

⚠️ Potential issue | 🟠 Major

discoverTicks return type in both JSDoc and .d.ts is incomplete/incorrect.

The function returns tickShas in the perWriter entries (line 439–443), but the JSDoc at line 389 declares perWriter: Map<string, {ticks: number[], tipSha: string|null}> — omitting tickShas. Additionally, _wiredMethods.d.ts line 85 declares it as perWriter: Map<string, number>, which is completely wrong.

Fix JSDoc (line 389) and _wiredMethods.d.ts (line 85)

In patch.methods.js, update the JSDoc:

-*   perWriter: Map<string, {ticks: number[], tipSha: string|null}>
+*   perWriter: Map<string, {ticks: number[], tipSha: string|null, tickShas: Record<number, string>}>

In _wiredMethods.d.ts, update line 85:

-    discoverTicks(): Promise<{ ticks: number[]; maxTick: number; perWriter: Map<string, number> }>;
+    discoverTicks(): Promise<{ ticks: number[]; maxTick: number; perWriter: Map<string, { ticks: number[]; tipSha: string | null; tickShas: Record<number, string> }> }>;
🤖 Prompt for AI Agents
In `@src/domain/warp/patch.methods.js` around lines 396 - 450, Update the
discoverTicks return type to include tickShas: amend the JSDoc for function
discoverTicks so perWriter entries are typed as { ticks: number[], tipSha:
string | null, tickShas: Record<number, string> } and the overall return as {
ticks: number[], maxTick: number, perWriter: Map<string, {ticks:number[],
tipSha:string|null, tickShas: Record<number,string>}> }; also update the
declaration in _wiredMethods.d.ts for the discoverTicks method to match the same
corrected return signature (replace the incorrect perWriter: Map<string, number>
with the Map to the proper object shape containing tickShas).

Comment on lines +116 to +144
export async function neighbors(nodeId, direction = 'both', edgeLabel = undefined) {
  await this._ensureFreshState();
  const s = /** @type {import('../services/JoinReducer.js').WarpStateV5} */ (this._cachedState);

  /** @type {Array<{nodeId: string, label: string, direction: 'outgoing' | 'incoming'}>} */
  const result = [];

  for (const edgeKey of orsetElements(s.edgeAlive)) {
    const { from, to, label } = decodeEdgeKey(edgeKey);

    if (edgeLabel !== undefined && label !== edgeLabel) {
      continue;
    }

    if ((direction === 'outgoing' || direction === 'both') && from === nodeId) {
      if (orsetContains(s.nodeAlive, to)) {
        result.push({ nodeId: to, label, direction: /** @type {const} */ ('outgoing') });
      }
    }

    if ((direction === 'incoming' || direction === 'both') && to === nodeId) {
      if (orsetContains(s.nodeAlive, from)) {
        result.push({ nodeId: from, label, direction: /** @type {const} */ ('incoming') });
      }
    }
  }

  return result;
}

⚠️ Potential issue | 🟠 Major

neighbors return shape (nodeId + direction) doesn't match the TypeScript declaration.

The implementation returns { nodeId, label, direction } but _wiredMethods.d.ts line 21 declares the return as Array<{ neighborId: string; label: string }> — using neighborId instead of nodeId and omitting the direction field. TypeScript consumers will see the wrong field name.

One of these needs to be updated to match the other. If the implementation is correct:

Fix in _wiredMethods.d.ts (line 21)
-    neighbors(nodeId: string, direction?: string, edgeLabel?: string): Promise<Array<{ neighborId: string; label: string }>>;
+    neighbors(nodeId: string, direction?: 'outgoing' | 'incoming' | 'both', edgeLabel?: string): Promise<Array<{ nodeId: string; label: string; direction: 'outgoing' | 'incoming' }>>;
🤖 Prompt for AI Agents
In `@src/domain/warp/query.methods.js` around lines 116 - 144, The runtime
neighbors function returns objects shaped { nodeId, label, direction } but the
TypeScript declaration in _wiredMethods.d.ts (around the currently noted line)
declares Array<{ neighborId: string; label: string }>; update the declaration to
match the implementation by changing the return type to Array<{ nodeId: string;
label: string; direction: 'outgoing' | 'incoming' }>, ensuring the exported
signature for neighbors aligns with the runtime shape and the direction literal
types match those emitted by neighbors.

@github-actions

Release Preflight

  • package version: 11.2.0
  • prerelease: false
  • npm dist-tag on release: latest
  • npm pack dry-run: passed
  • jsr publish dry-run: passed

If you tag this commit as v11.2.0, release workflow will publish.

…ng, safety

Actionable issues (6):
- Fix 7 return-type/signature mismatches in _wiredMethods.d.ts (neighbors,
  translationCost, _loadPatchesBySha, createWormhole, patch callback,
  _setMaterializedState, discoverTicks)
- Move ceiling check inside try/catch in materialize() so errors are logged
- Fix _resolveCeiling to use 'ceiling' in options (matches JSDoc semantics)
- Add tickShas to discoverTicks JSDoc perWriter type
- Iterate shallow copy of _subscribers to prevent mutation-during-dispatch
- Fix createWriter to only console.warn when no logger is configured

Nitpick improvements (9):
- Add prototype collision check in wireWarpMethods
- Replace allPatches.push(...writerPatches) with loop to avoid stack overflow
- Use this._clock.now() consistently instead of Date.now() in GC timing
- Remove duplicate fields from getGCMetrics (return canonical shape only)
- Eliminate double getFrontier() call in status()
- Align provenance.methods.js import to use _internal.js barrel
- Add defensive guard in _computeBackwardCone for null provenance index
- Add 100k step safety bound to _isAncestor to prevent infinite loops
- _resolveCeiling: coalesce undefined to null when using 'in' operator
- status(): add JSDoc cast for _lastFrontier to satisfy null narrowing
@github-actions

Release Preflight

  • package version: 11.2.0
  • prerelease: false
  • npm dist-tag on release: latest
  • npm pack dry-run: passed
  • jsr publish dry-run: passed

If you tag this commit as v11.2.0, release workflow will publish.

The --node-modules-dir flag used in Docker Deno tests disallows
HTTPS URL imports. JSR specifiers work correctly in this mode.
@flyingrobots (Member, Author) commented

@coderabbitai review please 🥕

Addressed in commits 0d337e0, 8d44542, 5df9eba

Actionable Issues (6/6 resolved)

  1. _wiredMethods.d.ts type mismatches — Fixed 7 return-type/signature mismatches: neighbors, translationCost, _loadPatchesBySha, createWormhole, patch callback, _setMaterializedState, discoverTicks
  2. materialize.methods.js ceiling bypass — Moved ceiling check inside try/catch so _materializeWithCeiling errors are logged via _logTiming
  3. materializeAdvanced.methods.js _resolveCeiling — Changed to 'ceiling' in options to match JSDoc semantics (explicit { ceiling: undefined } now treated as override; see the sketch after this list)
  4. patch.methods.js discoverTicks JSDoc — Added tickShas: Record<number, string> to perWriter type
  5. query.methods.js neighbors return shape — the .d.ts declaration now matches the implementation: { nodeId, label, direction } with literal types
  6. subscribe.methods.js iteration safety — Iterate [...this._subscribers] snapshot to prevent mutation-during-dispatch
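
A tiny sketch of the semantic difference behind item 3, using the method names from this PR (return handling simplified; the undefined-to-null coalescing is mentioned in the commit notes above):

```js
// Wired onto WarpGraph.prototype, so `this` is the graph instance.
export function _resolveCeiling(options) {
  if ('ceiling' in options) {
    // Key present: explicit override, even when the value is undefined.
    return options.ceiling ?? null;
  }
  // Key absent: fall back to ceiling discovery.
  return this._seekCeiling();
}

// { ceiling: undefined } -> treated as an override (coalesced to null)
// {}                     -> falls back to _seekCeiling()
```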

Nitpick Improvements (9/12 addressed)

  • _wire.js: Added prototype collision check against existing members
  • materialize.methods.js: Replaced allPatches.push(...writerPatches) with for loop (stack safety)
  • patch.methods.js: createWriter now only console.warn when no logger configured
  • checkpoint.methods.js: Replaced all Date.now() with this._clock.now() in GC timing (4 sites)
  • checkpoint.methods.js: getGCMetrics returns canonical shape only (no duplicate ...rawMetrics spread)
  • sync.methods.js: status() now fetches frontier once and reuses it
  • provenance.methods.js: Aligned import to use _internal.js barrel
  • provenance.methods.js: Added defensive guard in _computeBackwardCone
  • fork.methods.js: Added 100k step safety bound to _isAncestor
  • ⏭️ apiSurface.test.js accessor pair label: Deferred — not affecting correctness, test stability unaffected
  • ⏭️ materialize.methods.js redundant _setMaterializedState call: Intentional defensive guard — added mental note but no code change
  • ⏭️ materializeAdvanced.methods.js patchLoader stops on non-patch: By design — checkpoints act as replay boundaries

Deno Test Fix

  • Replaced https://deno.land/std@0.224.0/assert/mod.ts with jsr:@std/assert across all 7 Deno test files — the --node-modules-dir flag disallows HTTPS URL imports (before/after shown below)
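
Concretely, the import swap looks like this (assertEquals is just a representative import):

```js
// Before (rejected when Deno runs with --node-modules-dir):
// import { assertEquals } from 'https://deno.land/std@0.224.0/assert/mod.ts';

// After (JSR specifier, allowed in --node-modules-dir mode):
import { assertEquals } from 'jsr:@std/assert';
```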

coderabbitai bot commented Feb 17, 2026

@flyingrobots Impressive follow-through! 🎉 You've systematically addressed all 6 actionable issues and 9 of the 12 nitpicks with clear commit traceability. The deferred items have solid justifications—intentional design choices that don't affect correctness.

Particularly nice to see:

  • Type system alignment (_wiredMethods.d.ts now matches all 7 implementation signatures)
  • Defensive improvements (iteration snapshot, _isAncestor safety bound, _computeBackwardCone guard)
  • Clock consistency (this._clock.now() across all GC timing sites)
  • Deno compatibility fix for --node-modules-dir mode

I'll review the updated changes now.


    /)/)
   (^.^)
   (")_(") Thorough work—reviewing now!
✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@github-actions

Release Preflight

  • package version: 11.2.0
  • prerelease: false
  • npm dist-tag on release: latest
  • npm pack dry-run: passed
  • jsr publish dry-run: passed

If you tag this commit as v11.2.0, release workflow will publish.

@coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (9)
src/domain/warp/fork.methods.js (1)

105-107: Date.now() used for fork name instead of this._clock.now().

All other timing in this file uses this._clock.now() (lines 49, 161, 167), but the fork name generation on line 107 uses Date.now(). If the injected clock is a test double or monotonic clock, this inconsistency could produce surprising fork names in tests.

This is minor since fork names are just identifiers, but worth noting for test determinism.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/domain/warp/fork.methods.js` around lines 105 - 107, The fork name
generation uses Date.now() which is inconsistent with the injected clock used
elsewhere; change the resolvedForkName computation to use this._clock.now()
instead of Date.now() (keep the fallback behaviour using forkName and preserve
the random suffix generation) so that resolvedForkName, forkName and code paths
that reference this._clock.now() remain deterministic in tests.
src/domain/warp/_wiredMethods.d.ts (1)

48-48: _relationToCheckpointHead return type could be narrowed to a string literal union.

The implementation returns 'same' | 'ahead' | 'behind' | 'diverged', but line 48 declares Promise<string>. Narrowing the return type would give consumers better type safety.

✏️ Narrower return type
-    _relationToCheckpointHead(ckHead: string, incomingSha: string): Promise<string>;
+    _relationToCheckpointHead(ckHead: string, incomingSha: string): Promise<'same' | 'ahead' | 'behind' | 'diverged'>;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/domain/warp/_wiredMethods.d.ts` at line 48, Update the
_relationToCheckpointHead declaration to reflect the exact string-literal union
returned by its implementation: change the return type from Promise<string> to
Promise<'same' | 'ahead' | 'behind' | 'diverged'> so callers get precise type
safety for the possible results of _relationToCheckpointHead.
src/domain/warp/patch.methods.js (1)

486-493: Misleading comment: propsChanged counts new props too, not just changed ones.

The comment on line 486 says "keys that existed in both but have different values," but the condition !oldReg (line 490) also counts properties that are entirely new from otherState. Consider updating the comment or renaming to propsAddedOrChanged for clarity.

✏️ Proposed comment fix
-  // Count property changes (keys that existed in both but have different values)
-  let propsChanged = 0;
+  // Count properties that are new or changed after merge
+  let propsChanged = 0;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/domain/warp/patch.methods.js` around lines 486 - 493, The comment above
the loop is inaccurate: the loop increments propsChanged for both new properties
and changed values; update the comment to reflect this (e.g., "Count properties
that are new or have changed values") or rename the variable propsChanged to
propsAddedOrChanged and update all references in this function (search for
propsChanged, mergedState.prop, and this._cachedState.prop) so the name matches
the behavior.
src/domain/warp/materialize.methods.js (2)

182-188: _materializeGraph calls _setMaterializedState redundantly on first invocation.

materialize() (line 131) already calls _setMaterializedState(state). When _materializeGraph() is first called and this._materializedGraph is null, line 185 calls it again. This is safe (idempotent) but wastes a hash computation and adjacency rebuild.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/domain/warp/materialize.methods.js` around lines 182 - 188,
_materializeGraph currently calls _setMaterializedState redundantly because
materialize() already sets it; update _materializeGraph so it only calls
_setMaterializedState when there is an existing materialized graph and the
stored state differs. In practice, change the condition around the call to
_setMaterializedState to require this._materializedGraph to be truthy (e.g. if
(this._materializedGraph && this._materializedGraph.state !== state) { await
this._setMaterializedState(state); }) so the initial invocation (when
this._materializedGraph is null) does not recompute hashes/adjacencies. Ensure
you reference the same types used today for the state argument.

152-161: cloneStateV5 is called on every materialize(), even with zero subscribers.

Line 161 always clones the full state to update _lastNotifiedState, regardless of whether any subscribers exist. For large graphs, this is a non-trivial allocation. Consider guarding it:

⚡ Skip clone when no subscribers
-    // Notify subscribers if state changed since last notification
-    // Also handles deferred replay for subscribers added with replay: true before cached state
-    if (this._subscribers.length > 0) {
-      const hasPendingReplay = this._subscribers.some(s => s.pendingReplay);
-      const diff = diffStates(this._lastNotifiedState, state);
-      if (!isEmptyDiff(diff) || hasPendingReplay) {
-        this._notifySubscribers(diff, state);
-      }
+    if (this._subscribers.length > 0) {
+      const hasPendingReplay = this._subscribers.some(s => s.pendingReplay);
+      const diff = diffStates(this._lastNotifiedState, state);
+      if (!isEmptyDiff(diff) || hasPendingReplay) {
+        this._notifySubscribers(diff, state);
+      }
+      // Clone state to prevent eager path mutations from affecting the baseline
+      this._lastNotifiedState = cloneStateV5(state);
+    } else {
+      // Reset baseline so first subscriber gets a correct diff
+      this._lastNotifiedState = null;
     }
-    // Clone state to prevent eager path mutations from affecting the baseline
-    this._lastNotifiedState = cloneStateV5(state);

Note: this requires diffStates to handle null as the before argument (which it already does, per subscribe.methods.js line 61).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/domain/warp/materialize.methods.js` around lines 152 - 161, The code
always calls cloneStateV5(state) and assigns to this._lastNotifiedState on every
materialize(), causing unnecessary allocations when there are no subscribers;
change materialize() to only update this._lastNotifiedState (via cloneStateV5)
when there are subscribers or when you need a non-null baseline (i.e., guard the
clone behind the same subscriber check used for notification:
this._subscribers.length > 0 or hasPendingReplay), relying on diffStates
accepting null as the before argument when no baseline is kept. Ensure you
reference and update the same symbols: materialize, this._subscribers,
hasPendingReplay usage, diffStates, cloneStateV5, and this._lastNotifiedState.
src/domain/warp/checkpoint.methods.js (1)

147-152: _loadLatestCheckpoint silently swallows all checkpoint loading errors.

Line 149's bare catch returns null for any failure, including corruption, permission errors, or I/O failures. This means corrupted checkpoints are silently skipped, falling back to full re-materialization without any diagnostic. Consider logging the error before returning null.

🔍 Proposed: log before swallowing
   try {
     return await loadCheckpoint(this._persistence, checkpointSha, { codec: this._codec });
-  } catch {
+  } catch (err) {
+    if (this._logger) {
+      this._logger.warn('[warp] Failed to load checkpoint, falling back to full materialization', {
+        checkpointSha,
+        error: /** @type {Error} */ (err).message,
+      });
+    }
     return null;
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/domain/warp/checkpoint.methods.js` around lines 147 - 152, The current
_loadLatestCheckpoint implementation swallows all errors from loadCheckpoint and
returns null; modify the catch to capture the error (e.g. catch (err)) and log
diagnostic information before returning null so corrupted/IO/permission issues
are visible. Locate the call to loadCheckpoint(this._persistence, checkpointSha,
{ codec: this._codec }) inside _loadLatestCheckpoint and replace the bare catch
with one that logs the error (use this._logger.error(...) if a logger exists,
otherwise console.error with a message including checkpointSha and the error)
and then return null.
src/domain/warp/subscribe.methods.js (1)

193-232: Polling timer is not cleared if the watch caller never calls unsubscribe().

If the WarpGraph instance is disposed or the watch is abandoned without calling unsubscribe(), the setInterval keeps running indefinitely, holding a reference to this (the WarpGraph) and periodically calling hasFrontierChanged() + materialize(). This is a resource leak for long-lived processes.

Consider documenting this expectation prominently, or using a WeakRef/cleanup pattern so the timer self-cancels when the graph is GC'd (a sketch follows the prompt below).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/domain/warp/subscribe.methods.js` around lines 193 - 232, The polling
interval created in the watch flow (pollIntervalId with setInterval) can outlive
the WarpGraph instance and is never cleared unless unsubscribe() is called,
leaking resources; modify the implementation to ensure the timer is cleared when
the graph is disposed or collected—either add a clear method invocation from the
WarpGraph disposal routine (call clearInterval(pollIntervalId) and set
pollIntervalId = null inside the graph's dispose/destroy method) and/or register
a FinalizationRegistry/WeakRef tied to this to automatically clearInterval when
the WarpGraph instance is GC'd; make sure to keep the existing unsubscribe()
behavior (clearing pollIntervalId and calling subscription.unsubscribe()) and to
reference pollIntervalId, pollInFlight, hasFrontierChanged, materialize, and
unsubscribe in your changes.
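
One possible shape for that self-cancelling timer, as a hedged sketch: startPolling, pollMs, and onTick are hypothetical names; only the polling behavior and the WeakRef/FinalizationRegistry idea come from the review above.

```js
// Clears the interval if the registered graph is garbage-collected.
const registry = new FinalizationRegistry((intervalId) => clearInterval(intervalId));

function startPolling(graph, pollMs, onTick) {
  const ref = new WeakRef(graph); // don't let the timer keep the graph alive
  const intervalId = setInterval(() => {
    const g = ref.deref();
    if (!g) {
      clearInterval(intervalId); // graph was collected between ticks
      return;
    }
    onTick(g); // e.g. hasFrontierChanged() + materialize()
  }, pollMs);
  registry.register(graph, intervalId); // belt-and-braces cleanup on GC
  return () => clearInterval(intervalId); // manual unsubscribe path
}
```
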
src/domain/warp/sync.methods.js (2)

90-103: getFrontier() makes N sequential I/O calls — one per writer.

Each readRef call is awaited sequentially inside the loop. For graphs with many writers, this could be slow. A Promise.all would parallelize the reads.

This is a recommended optimization, not blocking.

⚡ Parallelize ref reads
 export async function getFrontier() {
   const writerIds = await this.discoverWriters();
   const frontier = createFrontier();
 
-  for (const writerId of writerIds) {
-    const writerRef = buildWriterRef(this._graphName, writerId);
-    const tipSha = await this._persistence.readRef(writerRef);
-    if (tipSha) {
-      updateFrontier(frontier, writerId, tipSha);
-    }
-  }
+  await Promise.all(writerIds.map(async (writerId) => {
+    const writerRef = buildWriterRef(this._graphName, writerId);
+    const tipSha = await this._persistence.readRef(writerRef);
+    if (tipSha) {
+      updateFrontier(frontier, writerId, tipSha);
+    }
+  }));
 
   return frontier;
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/domain/warp/sync.methods.js` around lines 90 - 103, getFrontier currently
awaits this._persistence.readRef inside a for-loop causing N sequential I/O
calls; change it to parallelize reads by mapping writerIds (from
discoverWriters) to promises that call buildWriterRef(this._graphName, writerId)
and this._persistence.readRef(writerRef), await Promise.all on that array of
promises, then iterate the resulting tipSha values and call
updateFrontier(frontier, writerId, tipSha) for non-empty results; keep
createFrontier() and the final return as-is.

364-461: Date.now() on line 367 vs this._clock.now() on line 309.

The outer timing (t0) uses the injected clock, but attemptStart on line 367 uses Date.now() directly. Similarly, line 458 computes durationMs from Date.now(). If the injected clock is a test stub, attempt-level durations will use the real clock while the overall timing uses the stub, making test assertions unpredictable.

✏️ Use consistent clock source
-    const attemptStart = Date.now();
+    const attemptStart = this._clock.now();
     ...
-    const durationMs = Date.now() - attemptStart;
+    const durationMs = this._clock.now() - attemptStart;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/domain/warp/sync.methods.js` around lines 364 - 461, The attempt-level
timing in executeAttempt uses Date.now() for attemptStart and for computing
durationMs, which mixes real time with the injected test clock used elsewhere
(this._clock.now()); replace Date.now() usages in executeAttempt (attemptStart
and durationMs computation) with this._clock.now() so all timing uses the
injected clock (refer to executeAttempt, attemptStart, durationMs, and
this._clock).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/domain/warp/checkpoint.methods.js`:
- Around line 162-179: In _loadPatchesSince, avoid using the spread operator
when merging patches into allPatches because allPatches.push(...patches) can
exceed the engine's argument limit; replace that spread with an explicit loop
(e.g., for (const p of patches) allPatches.push(p)) or use array concatenation
in a non-variadic way so very large patches arrays from _loadWriterPatches do
not cause a stack/argument overflow.
- Around line 252-284: The _maybeRunGC function computes timeSinceCompaction
using this._clock.now() - this._lastGCTime without guarding for this._lastGCTime
=== 0, which can spuriously trigger GC; update _maybeRunGC so
timeSinceCompaction uses the same guard as maybeRunGC (e.g.,
timeSinceCompaction: this._lastGCTime > 0 ? this._clock.now() - this._lastGCTime
: 0) when building inputMetrics for shouldRunGC (references: _maybeRunGC,
maybeRunGC, this._lastGCTime, timeSinceCompaction, collectGCMetrics,
shouldRunGC).

In `@src/domain/warp/sync.methods.js`:
- Around line 249-276: After applying the response and updating
this._cachedState (result.state) and merging response.patches into
this._lastFrontier in applySyncResponse, clear the dirty flag by setting
this._stateDirty = false before returning; locate applySyncResponse (which calls
applySyncResponseImpl and updates _cachedState, _lastFrontier and
_patchesSinceGC) and add the single assignment this._stateDirty = false
immediately after the frontier-merge block so status() will report "fresh".

---

Duplicate comments:
In `@src/domain/warp/_wiredMethods.d.ts`:
- Around line 15-99: The previously reported signature mismatches have been
resolved—no code changes required; confirm the declarations for translationCost,
_loadPatchesBySha, createWormhole, patch (the build callback) and
_setMaterializedState in the WarpGraph interface now match their implementations
and approve the PR as-is.

In `@src/domain/warp/materialize.methods.js`:
- Around line 48-52: This review is a duplicate approval notice referencing the
ceiling delegation now moved into the try/catch (the ceiling check and call to
this._materializeWithCeiling(ceiling, !!collectReceipts, t0)); remove the
duplicate review/comment to avoid noise and ensure no code changes are made—the
ceiling delegation must remain inside the try block and still call
_materializeWithCeiling as shown.

In `@src/domain/warp/subscribe.methods.js`:
- Around line 245-271: The iteration must use a shallow copy to avoid iterator
corruption when a subscriber unsubscribes during onChange; in _notifySubscribers
ensure you iterate over [...this._subscribers] (not this._subscribers directly)
and preserve the existing logic around pendingReplay, diffStates, and
isEmptyDiff so unsubscribe calls inside subscriber.onChange won't affect the
loop; if any other loop over this._subscribers exists, replace it with a
shallow-copy iteration as well.

---

Nitpick comments:
In `@src/domain/warp/_wiredMethods.d.ts`:
- Line 48: Update the _relationToCheckpointHead declaration to reflect the exact
string-literal union returned by its implementation: change the return type from
Promise<string> to Promise<'same' | 'ahead' | 'behind' | 'diverged'> so callers
get precise type safety for the possible results of _relationToCheckpointHead.

In `@src/domain/warp/checkpoint.methods.js`:
- Around line 147-152: The current _loadLatestCheckpoint implementation swallows
all errors from loadCheckpoint and returns null; modify the catch to capture the
error (e.g. catch (err)) and log diagnostic information before returning null so
corrupted/IO/permission issues are visible. Locate the call to
loadCheckpoint(this._persistence, checkpointSha, { codec: this._codec }) inside
_loadLatestCheckpoint and replace the bare catch with one that logs the error
(use this._logger.error(...) if a logger exists, otherwise console.error with a
message including checkpointSha and the error) and then return null.

In `@src/domain/warp/fork.methods.js`:
- Around line 105-107: The fork name generation uses Date.now() which is
inconsistent with the injected clock used elsewhere; change the resolvedForkName
computation to use this._clock.now() instead of Date.now() (keep the fallback
behaviour using forkName and preserve the random suffix generation) so that
resolvedForkName, forkName and code paths that reference this._clock.now()
remain deterministic in tests.

In `@src/domain/warp/materialize.methods.js`:
- Around line 182-188: _materializeGraph currently calls _setMaterializedState
redundantly because materialize() already sets it; update _materializeGraph so
it only calls _setMaterializedState when there is an existing materialized graph
and the stored state differs. In practice, change the condition around the call
to _setMaterializedState to require this._materializedGraph to be truthy (e.g.
if (this._materializedGraph && this._materializedGraph.state !== state) { await
this._setMaterializedState(state); }) so the initial invocation (when
this._materializedGraph is null) does not recompute hashes/adjacencies. Ensure
you reference the same types used today for the state argument.
- Around line 152-161: The code always calls cloneStateV5(state) and assigns to
this._lastNotifiedState on every materialize(), causing unnecessary allocations
when there are no subscribers; change materialize() to only update
this._lastNotifiedState (via cloneStateV5) when there are subscribers or when
you need a non-null baseline (i.e., guard the clone behind the same subscriber
check used for notification: this._subscribers.length > 0 or hasPendingReplay),
relying on diffStates accepting null as the before argument when no baseline is
kept. Ensure you reference and update the same symbols: materialize,
this._subscribers, hasPendingReplay usage, diffStates, cloneStateV5, and
this._lastNotifiedState.

In `@src/domain/warp/patch.methods.js`:
- Around line 486-493: The comment above the loop is inaccurate: the loop
increments propsChanged for both new properties and changed values; update the
comment to reflect this (e.g., "Count properties that are new or have changed
values") or rename the variable propsChanged to propsAddedOrChanged and update
all references in this function (search for propsChanged, mergedState.prop, and
this._cachedState.prop) so the name matches the behavior.

In `@src/domain/warp/subscribe.methods.js`:
- Around line 193-232: The polling interval created in the watch flow
(pollIntervalId with setInterval) can outlive the WarpGraph instance and is
never cleared unless unsubscribe() is called, leaking resources; modify the
implementation to ensure the timer is cleared when the graph is disposed or
collected—either add a clear method invocation from the WarpGraph disposal
routine (call clearInterval(pollIntervalId) and set pollIntervalId = null inside
the graph's dispose/destroy method) and/or register a
FinalizationRegistry/WeakRef tied to this to automatically clearInterval when
the WarpGraph instance is GC'd; make sure to keep the existing unsubscribe()
behavior (clearing pollIntervalId and calling subscription.unsubscribe()) and to
reference pollIntervalId, pollInFlight, hasFrontierChanged, materialize, and
unsubscribe in your changes.

In `@src/domain/warp/sync.methods.js`:
- Around line 90-103: getFrontier currently awaits this._persistence.readRef
inside a for-loop causing N sequential I/O calls; change it to parallelize reads
by mapping writerIds (from discoverWriters) to promises that call
buildWriterRef(this._graphName, writerId) and
this._persistence.readRef(writerRef), await Promise.all on that array of
promises, then iterate the resulting tipSha values and call
updateFrontier(frontier, writerId, tipSha) for non-empty results; keep
createFrontier() and the final return as-is.
- Around line 364-461: The attempt-level timing in executeAttempt uses
Date.now() for attemptStart and for computing durationMs, which mixes real time
with the injected test clock used elsewhere (this._clock.now()); replace
Date.now() usages in executeAttempt (attemptStart and durationMs computation)
with this._clock.now() so all timing uses the injected clock (refer to
executeAttempt, attemptStart, durationMs, and this._clock).

- B41: auto-generate .d.ts from JSDoc to prevent declaration drift
- B42: CI .d.ts signature validation hook
- B43: explicit vitest excludes for runtime-specific test suites
- B44: subscriber unsubscribe-during-callback e2e test
- B45: type-level .d.ts conformance test
@github-actions

Release Preflight

  • package version: 11.2.0
  • prerelease: false
  • npm dist-tag on release: latest
  • npm pack dry-run: passed
  • jsr publish dry-run: passed

If you tag this commit as v11.2.0, release workflow will publish.

- Replace allPatches.push(...patches) with loop (stack safety)
- Add _lastGCTime > 0 guard in _maybeRunGC (spurious first-run GC)
- Clear _stateDirty in applySyncResponse (sync state freshness)
- Fix createHighTombstoneState test helper to use Dot objects
- Bump version to 11.2.1, update CHANGELOG
@github-actions

Release Preflight

  • package version: 11.2.1
  • prerelease: false
  • npm dist-tag on release: latest
  • npm pack dry-run: passed
  • jsr publish dry-run: passed

If you tag this commit as v11.2.1, release workflow will publish.

@coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (5)
src/domain/warp/sync.methods.js (2)

155-165: Staleness check in status() spreads the entire frontier into an array.

Line 161 uses [...frontier].every(...) which allocates an intermediate array of all frontier entries. For a lightweight status method documented as O(writers), this is fine for typical writer counts, but for...of with an early-exit would be more consistent with the stated goal and avoid the allocation.

♻️ Suggested simplification
-  } else if (this._stateDirty || !this._lastFrontier ||
-    frontier.size !== this._lastFrontier.size ||
-    ![...frontier].every(([w, sha]) => /** @type {Map<string, string>} */ (this._lastFrontier).get(w) === sha)) {
-    cachedState = 'stale';
+  } else if (this._stateDirty || !this._lastFrontier ||
+    frontier.size !== this._lastFrontier.size) {
+    cachedState = 'stale';
+  } else {
+    let isStale = false;
+    for (const [w, sha] of frontier) {
+      if (this._lastFrontier.get(w) !== sha) {
+        isStale = true;
+        break;
+      }
+    }
+    cachedState = isStale ? 'stale' : 'fresh';
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/domain/warp/sync.methods.js` around lines 155 - 165, The staleness check
in status() currently uses [...frontier].every(...) which allocates an
intermediate array; replace that expression with a for...of loop over frontier
that compares each ([w, sha]) to this._lastFrontier.get(w) and sets cachedState
= 'stale' then breaks on the first mismatch (otherwise continue to set 'fresh'),
keeping the existing conditions (this._stateDirty, !this._lastFrontier,
frontier.size mismatch) intact and using the same Map lookup semantics to avoid
the allocation and allow early exit.

496-503: RetryExhaustedError cause is re-thrown, discarding retry metadata.

Line 503 throws err.cause (the last inner error), which loses the RetryExhaustedError.attempts count and any other retry context. Callers catching the re-thrown error won't know retries were exhausted — they'll see a plain SyncError or TimeoutError.

If this is intentional (to keep the public API surface simple), consider documenting it. Otherwise, wrapping or preserving the retry context would help consumers distinguish "first-attempt failure" from "gave up after N retries."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/domain/warp/sync.methods.js` around lines 496 - 503, The current catch
path for RetryExhaustedError discards retry metadata by throwing err.cause;
instead preserve retry context by re-throwing the RetryExhaustedError itself
(throw err) or wrap the inner cause while copying retry properties (e.g., create
a new Error that sets .cause = cause and .attempts = err.attempts or attach
attempts to cause) so callers can still read RetryExhaustedError.attempts;
modify the handler around RetryExhaustedError (the block referencing
RetryExhaustedError, err.attempts and onStatus) to either throw err or augment
the thrown error with the attempts and original cause before throwing.
src/domain/warp/checkpoint.methods.js (3)

29-85: Duplicated frontier-building pattern across multiple methods.

createCheckpoint (lines 36–46), syncCoverage (lines 109–116), _loadPatchesSince (line 163), and getFrontier in sync.methods.js all repeat the same discover-writers → iterate → readRef → collect pattern. Since getFrontier() already encapsulates this, consider reusing it in createCheckpoint and syncCoverage to reduce duplication.

Also applies to: 99-130, 90-103, 162-181

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/domain/warp/checkpoint.methods.js` around lines 29 - 85, The
frontier-building loop is duplicated; replace the manual
discoverWriters→for-loop→readRef logic in createCheckpoint, syncCoverage, and
_loadPatchesSince with a single call to the existing getFrontier() helper (use
await this.getFrontier()). Use the returned frontier map for updateFrontier
usage and derive parents as Object.values(frontier).filter(Boolean) (or
equivalent) where the code currently pushes sha into parents. Remove the
duplicated discovery/readRef loop and any local frontier initialization in those
functions so they reuse getFrontier() consistently.
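
A sketch of what the consolidated createCheckpoint could look like under those assumptions; the rest of the checkpoint body is elided, and since getFrontier() returns a Map, the parents derivation uses frontier.values() rather than Object.values:

export async function createCheckpoint(options = {}) {
  // Reuse the existing helper instead of repeating discoverWriters → readRef
  const frontier = await this.getFrontier();

  // Parent SHAs are just the non-empty writer tips
  const parents = [...frontier.values()].filter(Boolean);

  // ...build and persist the checkpoint from `frontier` and `parents`...
}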

147-151: Silent catch in _loadLatestCheckpoint hides corruption or I/O errors.

Returning null on any error (lines 149–151) means callers can't distinguish "no checkpoint" from "checkpoint exists but is corrupted / unreadable." This could mask real issues during migration validation or incremental materialization.

At minimum, logging the error when a logger is available would aid debugging.

♻️ Suggested improvement
   try {
     return await loadCheckpoint(this._persistence, checkpointSha, { codec: this._codec });
-  } catch {
+  } catch (err) {
+    if (this._logger) {
+      this._logger.warn('Failed to load checkpoint', { sha: checkpointSha, error: err });
+    }
     return null;
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/domain/warp/checkpoint.methods.js` around lines 147 - 151, The current
silent catch in _loadLatestCheckpoint around the call to
loadCheckpoint(this._persistence, checkpointSha, { codec: this._codec }) hides
real I/O or corruption errors; change the catch to capture the error (e.g.,
catch (err)) and, if a logger is available on the instance (this._logger or
this.logger), call logger.error with a clear message and the error details
(include checkpointSha and persistence/context), then return null; if no logger
exists consider rethrowing non-ENOENT/not-found errors instead of swallowing
them.

283-285: Silent catch swallows errors without any observability.

The empty catch block is intentional ("GC failure never breaks materialize"), but silently discarding the error makes debugging GC issues difficult. Consider logging at debug/warn level when a logger is available.

♻️ Suggested improvement
-  } catch {
-    // GC failure never breaks materialize
+  } catch (err) {
+    // GC failure never breaks materialize
+    if (this._logger) {
+      this._logger.warn('_maybeRunGC failed silently', { error: err });
+    }
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/domain/warp/checkpoint.methods.js` around lines 283 - 285, The empty
catch that swallows errors (the block with the comment "GC failure never breaks
materialize") should capture the exception and log it instead of discarding it:
change the catch to catch (err) and emit a debug/warn-level log (using the
existing logger if available, e.g., logger.warn or this.logger.warn, falling
back to console.debug/console.warn) with a clear message like "GC failure during
materialize" plus the error details, but keep the behavior of not rethrowing so
materialize still proceeds.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/domain/warp/sync.methods.js`:
- Around line 367-371: executeAttempt is using Date.now() for attemptStart while
earlier t0 uses this._clock.now(), causing mixed clock sources; update
executeAttempt to use this._clock.now() instead of Date.now() and change any
other local duration calculations in this function (including the per-attempt
durationMs logic referenced around where durationMs is computed) to derive from
this._clock.now(); locate and replace occurrences inside the executeAttempt
closure and the nearby duration computation (around the durationMs / attempt
timing code) so the injected clock controls all timing and metrics consistently.

---

Duplicate comments:
In `@src/domain/warp/checkpoint.methods.js`:
- Around line 258-262: The timeSinceCompaction calculation in inputMetrics
should use the same guard logic as maybeRunGC(): ensure you compute
timeSinceCompaction as this._lastGCTime > 0 ? this._clock.now() -
this._lastGCTime : 0 (as done now), and verify any other usages (e.g., in
maybeRunGC(), checkpointSave, or telemetry code paths) consistently use the same
_lastGCTime > 0 guard with this._clock.now() and the same field names
(_patchesSinceGC, _lastGCTime, _clock) to avoid negative or misleading values.
- Around line 175-177: Remove the stray "[duplicate_comment]" review artifact
and any duplicate review text from the comment block; no code changes to the
stack-safe loop (for (const p of patches) { allPatches.push(p); }) are
needed—just delete the duplicate marker so the review/comment is clean and not
reported twice.

In `@src/domain/warp/sync.methods.js`:
- Line 276: Ensure the dirty flag is cleared only after the sync response has
been fully applied: in the sync handler (the method that processes the sync
response—e.g., the function that applies the incoming patch/response), keep or
move the statement this._stateDirty = false so it runs after all state
updates/merges are completed, not before; also remove the stale
duplicate_comment marker left in the diff.

---

Nitpick comments:
In `@src/domain/warp/checkpoint.methods.js`:
- Around line 29-85: The frontier-building loop is duplicated; replace the
manual discoverWriters→for-loop→readRef logic in createCheckpoint, syncCoverage,
and _loadPatchesSince with a single call to the existing getFrontier() helper
(use await this.getFrontier()). Use the returned frontier map for updateFrontier
usage and derive parents as Object.values(frontier).filter(Boolean) (or
equivalent) where the code currently pushes sha into parents. Remove the
duplicated discovery/readRef loop and any local frontier initialization in those
functions so they reuse getFrontier() consistently.
- Around line 147-151: The current silent catch in _loadLatestCheckpoint around
the call to loadCheckpoint(this._persistence, checkpointSha, { codec:
this._codec }) hides real I/O or corruption errors; change the catch to capture
the error (e.g., catch (err)) and, if a logger is available on the instance
(this._logger or this.logger), call logger.error with a clear message and the
error details (include checkpointSha and persistence/context), then return null;
if no logger exists consider rethrowing non-ENOENT/not-found errors instead of
swallowing them.
- Around line 283-285: The empty catch that swallows errors (the block with the
comment "GC failure never breaks materialize") should capture the exception and
log it instead of discarding it: change the catch to catch (err) and emit a
debug/warn-level log (using the existing logger if available, e.g., logger.warn
or this.logger.warn, falling back to console.debug/console.warn) with a clear
message like "GC failure during materialize" plus the error details, but keep
the behavior of not rethrowing so materialize still proceeds.

In `@src/domain/warp/sync.methods.js`:
- Around line 155-165: The staleness check in status() currently uses
[...frontier].every(...) which allocates an intermediate array; replace that
expression with a for...of loop over frontier that compares each ([w, sha]) to
this._lastFrontier.get(w) and sets cachedState = 'stale' then breaks on the
first mismatch (otherwise continue to set 'fresh'), keeping the existing
conditions (this._stateDirty, !this._lastFrontier, frontier.size mismatch)
intact and using the same Map lookup semantics to avoid the allocation and allow
early exit.
- Around line 496-503: The current catch path for RetryExhaustedError discards
retry metadata by throwing err.cause; instead preserve retry context by
re-throwing the RetryExhaustedError itself (throw err) or wrap the inner cause
while copying retry properties (e.g., create a new Error that sets .cause =
cause and .attempts = err.attempts or attach attempts to cause) so callers can
still read RetryExhaustedError.attempts; modify the handler around
RetryExhaustedError (the block referencing RetryExhaustedError, err.attempts and
onStatus) to either throw err or augment the thrown error with the attempts and
original cause before throwing.

Replace Date.now() with this._clock.now() for attemptStart and
durationMs in executeAttempt closure, matching the outer t0 timing
and enabling injected-clock testability.
@flyingrobots
Copy link
Member Author

Round 3 Feedback — All Issues Resolved

Already addressed (CodeRabbit didn't mark ✅):

  1. discoverTicks return type (patch.methods.js:451, _wiredMethods.d.ts:85) — Both JSDoc and .d.ts already include tickShas: Record<number, string> in the perWriter type, fixed in commit 0d337e0.

  2. neighbors return shape (query.methods.js:144, _wiredMethods.d.ts:21) — .d.ts already uses nodeId (not neighborId) and includes direction: 'outgoing' | 'incoming', fixed in commit 0d337e0.

Newly fixed:

  1. Inconsistent clock source in syncWith (sync.methods.js:370,461) — Replaced Date.now() with this._clock.now() for attemptStart and durationMs, matching the outer t0 timing. Fixed in commit b3ec0b1.

@flyingrobots
Copy link
Member Author

@coderabbitai review please

@coderabbitai
Copy link

coderabbitai bot commented Feb 17, 2026

@flyingrobots I'll review the changes now. You've done excellent work systematically addressing all the previous feedback across multiple rounds!

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@github-actions
Copy link

Release Preflight

  • package version: 11.2.1
  • prerelease: false
  • npm dist-tag on release: latest
  • npm pack dry-run: passed
  • jsr publish dry-run: passed

If you tag this commit as v11.2.1, the release workflow will publish.

Copy link

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
src/domain/warp/sync.methods.js (1)

449-456: Response shape validation doesn't inspect individual patch entries.

The validation confirms response.patches is an array but doesn't verify that each element has the expected { writerId, sha } shape before passing them to applySyncResponse (which iterates patches on line 265). Malformed entries are handled gracefully (the if (writerId && sha) guard on line 266 skips them), so this isn't a correctness issue — just noting that stricter upfront validation here could surface protocol bugs earlier.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/domain/warp/sync.methods.js` around lines 449 - 456, The validation for
the sync response should be strengthened to ensure each entry in
response.patches conforms to the expected shape before calling
applySyncResponse: inside the existing validation block (where response.patches
is checked) iterate response.patches and verify each patch is an object and has
non-empty string properties writerId and sha (or whichever types your protocol
requires); if any entry fails, throw a SyncError with code 'E_SYNC_PROTOCOL' and
a clear message. Update the guard surrounding response.patches in the same
function so malformed patch entries are rejected early rather than relying on
applySyncResponse's per-entry guard.
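
A sketch of the stricter upfront validation described above; SyncError and its { code, context } options follow the error style shown elsewhere in this thread, while the message text is an assumption:

if (!Array.isArray(response.patches)) {
  throw new SyncError('Malformed sync response: patches must be an array', {
    code: 'E_SYNC_PROTOCOL',
  });
}
for (const [i, patch] of response.patches.entries()) {
  const wellFormed = patch !== null && typeof patch === 'object' &&
    typeof patch.writerId === 'string' && patch.writerId.length > 0 &&
    typeof patch.sha === 'string' && patch.sha.length > 0;
  if (!wellFormed) {
    // Reject early instead of relying on applySyncResponse's per-entry guard
    throw new SyncError(`Malformed patch entry at index ${i}`, {
      code: 'E_SYNC_PROTOCOL',
      context: { index: i },
    });
  }
}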
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/domain/warp/sync.methods.js`:
- Around line 375-378: The direct-peer branch in sync method currently awaits
remote.processSyncRequest(request) without a timeout, so if isDirectPeer is true
the call can hang indefinitely; wrap the call to
remote.processSyncRequest(request) with the same timeout(timeoutMs, ...) utility
used for the HTTP path (preserving the emit('requestSent') and
emit('responseReceived') semantics and honoring existing signal/abort behavior)
so that processSyncRequest is rejected after timeoutMs; update the assignment to
response accordingly and reuse the timeoutMs and timeout() helper already
present in this module.

---

Nitpick comments:
In `@src/domain/warp/sync.methods.js`:
- Around line 449-456: The validation for the sync response should be
strengthened to ensure each entry in response.patches conforms to the expected
shape before calling applySyncResponse: inside the existing validation block
(where response.patches is checked) iterate response.patches and verify each
patch is an object and has non-empty string properties writerId and sha (or
whichever types your protocol requires); if any entry fails, throw a SyncError
with code 'E_SYNC_PROTOCOL' and a clear message. Update the guard surrounding
response.patches in the same function so malformed patch entries are rejected
early rather than relying on applySyncResponse's per-entry guard.

Comment on lines +375 to +378
if (isDirectPeer) {
emit('requestSent');
response = await remote.processSyncRequest(request);
emit('responseReceived');
Copy link


⚠️ Potential issue | 🟡 Minor

Direct-peer sync path has no timeout — could hang indefinitely.

When isDirectPeer is true, remote.processSyncRequest(request) is awaited without any timeout wrapper, unlike the HTTP path which is guarded by timeout(timeoutMs, ...). A misbehaving or slow peer would block the caller forever with no recourse (the outer signal is only checked before the call via checkAborted, not during).

Consider wrapping the direct-peer call with the same timeout() utility:

Suggested change
     if (isDirectPeer) {
       emit('requestSent');
-      response = await remote.processSyncRequest(request);
+      response = await timeout(timeoutMs, () => remote.processSyncRequest(request));
       emit('responseReceived');
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/domain/warp/sync.methods.js` around lines 375 - 378, The direct-peer
branch in sync method currently awaits remote.processSyncRequest(request)
without a timeout, so if isDirectPeer is true the call can hang indefinitely;
wrap the call to remote.processSyncRequest(request) with the same
timeout(timeoutMs, ...) utility used for the HTTP path (preserving the
emit('requestSent') and emit('responseReceived') semantics and honoring existing
signal/abort behavior) so that processSyncRequest is rejected after timeoutMs;
update the assignment to response accordingly and reuse the timeoutMs and
timeout() helper already present in this module.

- B46: ESLint ban Date.now() in domain files (see the config sketch after this list)
- B47: orsetAdd dot argument validation
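
A hypothetical ESLint flat-config sketch for B46, using the built-in no-restricted-properties rule (the file glob and message text are assumptions):

export default [
  {
    files: ['src/domain/**/*.js'],
    rules: {
      // Ban Date.now() so all domain timing goes through the injected clock
      'no-restricted-properties': ['error', {
        object: 'Date',
        property: 'now',
        message: 'Use this._clock.now() in domain files (injected clock).',
      }],
    },
  },
];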
@github-actions
Copy link

Release Preflight

  • package version: 11.2.1
  • prerelease: false
  • npm dist-tag on release: latest
  • npm pack dry-run: passed
  • jsr publish dry-run: passed

If you tag this commit as v11.2.1, the release workflow will publish.

Copy link

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (3)
src/domain/warp/sync.methods.js (3)

466-509: Retry error unwrapping discards metadata.

On lines 498–503, when a RetryExhaustedError is caught, the code throws err.cause (the last failure), discarding the retry metadata (attempt count, all intermediate errors). Callers who catch the re-thrown error lose visibility into the fact that retries were exhausted.

Consider preserving the RetryExhaustedError as the thrown error (or at least as the cause of the re-thrown error), so callers can distinguish "first attempt failed" from "all retries exhausted":

Suggested change
    if (err instanceof RetryExhaustedError) {
      const cause = /** @type {Error} */ (err.cause || err);
      if (typeof onStatus === 'function') {
        onStatus({ type: 'failed', attempt: err.attempts, error: cause });
      }
-     throw cause;
+     cause.retryAttempts = err.attempts;
+     throw cause;
    }

Or alternatively, wrap it:

-     throw cause;
+     throw new SyncError(cause.message, {
+       code: cause.code || 'E_SYNC_RETRIES_EXHAUSTED',
+       context: { attempts: err.attempts },
+       cause,
+     });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/domain/warp/sync.methods.js` around lines 466 - 509, The catch block
currently unwraps RetryExhaustedError and throws err.cause, losing retry
metadata; instead preserve the RetryExhaustedError (or wrap the cause and attach
the original RetryExhaustedError as its cause) so callers can detect retries
were exhausted. Concretely, in the catch branch that checks "if (err instanceof
RetryExhaustedError)" (symbol: RetryExhaustedError) stop throwing "cause"
directly — either rethrow "err" itself and pass its metadata to onStatus (use
err.attempts), or create a new Error that includes the original cause but sets
the RetryExhaustedError as the .cause property (so callers see both the final
error and the retry exhaustion metadata); update the onStatus call to use
err.attempts and ensure the thrown object preserves err (do not replace it with
err.cause).

420-441: Unconsumed response body on HTTP error status codes.

When res.status >= 400 or >= 500, the response body is never read or drained. In Node.js's fetch (undici), unconsumed bodies can prevent connection reuse and may trigger resource warnings under load.

Consider consuming the body before throwing:

Suggested fix
      if (res.status >= 500) {
+       await res.body?.cancel();
        throw new SyncError(`Remote error: ${res.status}`, {
          code: 'E_SYNC_REMOTE',
          context: { status: res.status },
        });
      }

      if (res.status >= 400) {
+       await res.body?.cancel();
        throw new SyncError(`Protocol error: ${res.status}`, {
          code: 'E_SYNC_PROTOCOL',
          context: { status: res.status },
        });
      }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/domain/warp/sync.methods.js` around lines 420 - 441, The HTTP error
handling currently throws SyncError for res.status >= 400 / >= 500 without
reading the response body, which can leak resources in Node's fetch; before each
throw (the blocks that reference res and throw new SyncError) consume/drain the
body (e.g., await res.text() or similar inside a try/catch), include a short
snippet of the drained body (or its length/truncated preview) in the SyncError
context, and handle any errors from consuming the body gracefully so the
original SyncError is still thrown; update the two relevant error branches that
reference res.status and the subsequent JSON parsing site (where response =
await res.json()) to ensure the body is consumed only once and errors are
handled clearly.

155-165: Minor: spreading the entire frontier Map for the staleness check is wasteful.

![...frontier].every(...) allocates a temporary array of all frontier entries just to check equality. A simple for...of loop over the Map would allow early exit without the allocation.

Suggested refactor
-  } else if (this._stateDirty || !this._lastFrontier ||
-    frontier.size !== this._lastFrontier.size ||
-    ![...frontier].every(([w, sha]) => /** @type {Map<string, string>} */ (this._lastFrontier).get(w) === sha)) {
-    cachedState = 'stale';
+  } else if (this._stateDirty || !this._lastFrontier ||
+    frontier.size !== this._lastFrontier.size ||
+    (() => { for (const [w, sha] of frontier) { if (/** @type {Map<string, string>} */ (this._lastFrontier).get(w) !== sha) return true; } return false; })()) {
+    cachedState = 'stale';

Or better yet, extract the comparison into a small helper for readability:

function frontiersEqual(a, b) {
  if (a.size !== b.size) return false;
  for (const [w, sha] of a) {
    if (b.get(w) !== sha) return false;
  }
  return true;
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/domain/warp/sync.methods.js` around lines 155 - 165, The staleness check
currently spreads frontier into an array for every(...) which allocates
unnecessarily; replace the spread/every expression in the cachedState
calculation (the condition using frontier and this._lastFrontier) with a
non-allocating comparison — either iterate with a for...of loop over frontier
and early-return false on mismatch, or extract a helper like frontiersEqual(a,b)
that checks sizes and then for...of a to compare b.get(key) to value; update the
cachedState branch that uses this._lastFrontier, frontier, this._stateDirty and
this._cachedState to call that helper or the loop instead of using
[...frontier].every(...).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/domain/warp/sync.methods.js`:
- Around line 249-279: The code in applySyncResponse redundantly merges
response.patches into this._lastFrontier after applySyncResponseImpl has already
produced an updated frontier; replace the manual loop that iterates
response.patches with assigning the computed frontier from the impl: set
this._lastFrontier = result.frontier (ensuring types/shape match, e.g., a Map)
so _lastFrontier is updated atomically and you no longer iterate
response.patches; keep other updates (this._cachedState, _patchesSinceGC,
_stateDirty) intact.

---

Duplicate comments:
In `@src/domain/warp/sync.methods.js`:
- Around line 375-378: The direct-peer branch (isDirectPeer) calls
remote.processSyncRequest without any timeout, so a slow/misbehaving peer can
hang the caller; wrap the call to remote.processSyncRequest in a timeout guard
(e.g., Promise.race with a configurable timeout or an AbortSignal-aware call if
remote supports cancellation) and throw a clear timeout error when expired so
callers don't wait forever; ensure emit('requestSent') still occurs before the
call and emit('responseReceived') (or an appropriate timeout/failed event) is
emitted only after a definitive outcome, and keep shouldRetry behavior unchanged
for direct peers.

---

Nitpick comments:
In `@src/domain/warp/sync.methods.js`:
- Around line 466-509: The catch block currently unwraps RetryExhaustedError and
throws err.cause, losing retry metadata; instead preserve the
RetryExhaustedError (or wrap the cause and attach the original
RetryExhaustedError as its cause) so callers can detect retries were exhausted.
Concretely, in the catch branch that checks "if (err instanceof
RetryExhaustedError)" (symbol: RetryExhaustedError) stop throwing "cause"
directly — either rethrow "err" itself and pass its metadata to onStatus (use
err.attempts), or create a new Error that includes the original cause but sets
the RetryExhaustedError as the .cause property (so callers see both the final
error and the retry exhaustion metadata); update the onStatus call to use
err.attempts and ensure the thrown object preserves err (do not replace it with
err.cause).
- Around line 420-441: The HTTP error handling currently throws SyncError for
res.status >= 400 / >= 500 without reading the response body, which can leak
resources in Node's fetch; before each throw (the blocks that reference res and
throw new SyncError) consume/drain the body (e.g., await res.text() or similar
inside a try/catch), include a short snippet of the drained body (or its
length/truncated preview) in the SyncError context, and handle any errors from
consuming the body gracefully so the original SyncError is still thrown; update
the two relevant error branches that reference res.status and the subsequent
JSON parsing site (where response = await res.json()) to ensure the body is
consumed only once and errors are handled clearly.
- Around line 155-165: The staleness check currently spreads frontier into an
array for every(...) which allocates unnecessarily; replace the spread/every
expression in the cachedState calculation (the condition using frontier and
this._lastFrontier) with a non-allocating comparison — either iterate with a
for...of loop over frontier and early-return false on mismatch, or extract a
helper like frontiersEqual(a,b) that checks sizes and then for...of a to compare
b.get(key) to value; update the cachedState branch that uses this._lastFrontier,
frontier, this._stateDirty and this._cachedState to call that helper or the loop
instead of using [...frontier].every(...).

Comment on lines +249 to +279
export function applySyncResponse(response) {
if (!this._cachedState) {
throw new QueryError(E_NO_STATE_MSG, {
code: 'E_NO_STATE',
});
}

const currentFrontier = /** @type {any} */ (this._cachedState.observedFrontier); // TODO(ts-cleanup): narrow port type
const result = /** @type {{state: import('../services/JoinReducer.js').WarpStateV5, frontier: Map<string, string>, applied: number}} */ (applySyncResponseImpl(response, this._cachedState, currentFrontier));

// Update cached state
this._cachedState = result.state;

// Keep _lastFrontier in sync so hasFrontierChanged() won't misreport stale.
// Merge the response's per-writer tips into the stored frontier snapshot.
if (this._lastFrontier && Array.isArray(response.patches)) {
for (const { writerId, sha } of response.patches) {
if (writerId && sha) {
this._lastFrontier.set(writerId, sha);
}
}
}

// Track patches for GC
this._patchesSinceGC += result.applied;

// State is now in sync with the frontier — clear dirty flag
this._stateDirty = false;

return result;
}
Copy link


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain (condensed from roughly two dozen repository-search scripts): the checks traced applySyncResponseImpl in src/domain/services/SyncProtocol.js, confirmed its return value already carries the computed frontier, and verified every usage of result.frontier, _lastFrontier, and _cachedState.observedFrontier across src/domain/warp/ and the status tests before reaching the conclusion below.


Use result.frontier instead of manually merging patches into _lastFrontier.

The result from applySyncResponseImpl includes a computed frontier property that already reflects the updated frontier after applying patches (lines 499–501 in SyncProtocol.js). The current code (lines 264–270) redundantly processes response.patches again to manually update _lastFrontier. Simplifying to this._lastFrontier = result.frontier would be clearer and automatically handle format changes without requiring manual patch iteration logic.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/domain/warp/sync.methods.js` around lines 249 - 279, The code in
applySyncResponse redundantly merges response.patches into this._lastFrontier
after applySyncResponseImpl has already produced an updated frontier; replace
the manual loop that iterates response.patches with assigning the computed
frontier from the impl: set this._lastFrontier = result.frontier (ensuring
types/shape match, e.g., a Map) so _lastFrontier is updated atomically and you
no longer iterate response.patches; keep other updates (this._cachedState,
_patchesSinceGC, _stateDirty) intact.
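
Expressed as a diff against the quoted applySyncResponse above, and assuming result.frontier is the Map<string, string> the impl returns, the simplification would look roughly like this:

-  // Keep _lastFrontier in sync so hasFrontierChanged() won't misreport stale.
-  // Merge the response's per-writer tips into the stored frontier snapshot.
-  if (this._lastFrontier && Array.isArray(response.patches)) {
-    for (const { writerId, sha } of response.patches) {
-      if (writerId && sha) {
-        this._lastFrontier.set(writerId, sha);
-      }
-    }
-  }
+  // Adopt the frontier applySyncResponseImpl already computed
+  this._lastFrontier = result.frontier;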

@flyingrobots flyingrobots merged commit 6f1e882 into main Feb 17, 2026
7 checks passed
@flyingrobots flyingrobots deleted the warp-graph-decompose branch February 17, 2026 09:19