refactor: decompose WarpGraph monolith into method modules #38
flyingrobots merged 14 commits into main
Conversation
Add API surface snapshot test, wiring invariant test, _internal.js barrel, and _wire.js prototype wiring helper. These locks must pass before and after every extraction phase.
Move hasNode, getNodeProps, getEdgeProps, neighbors, getNodes, getEdges, getPropertyCount, getStateSnapshot, query, observer, translationCost to src/domain/warp/query.methods.js. Wire via wireWarpMethods().
…pGraph — Extract 8 method modules from WarpGraph.js (3260 → 416 lines, -87%):
- subscribe.methods.js: subscribe, watch, _notifySubscribers
- provenance.methods.js: patchesFor, materializeSlice, loadPatchBySha, etc.
- fork.methods.js: fork, createWormhole, _isAncestor, etc.
- sync.methods.js: getFrontier, syncWith, serve, status, etc.
- checkpoint.methods.js: createCheckpoint, syncCoverage, runGC, etc.
- patch.methods.js: createPatch, writer, discoverWriters, join, etc.
- materialize.methods.js: materialize, _materializeGraph
- materializeAdvanced.methods.js: materializeAt, _resolveCeiling, etc.

WarpGraph.js retains: constructor, static open(), property getters, _logTiming(), and the wireWarpMethods() call.

Fixes during wiring: QueryError default→named import in patch.methods, no-shadow for local 'patch' variables, unused import cleanup. All 3741 tests pass, lint clean.
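For orientation, here is a minimal sketch of the extraction pattern these commits describe, using the names from this PR (wireWarpMethods, query.methods.js); the function bodies and field names are illustrative assumptions, not the real implementations.

```js
// Sketch only: bodies and field names are placeholders.

// A method module exports plain functions that expect `this` to be a WarpGraph:
export async function hasNode(nodeId) {
  await this._ensureFreshState();             // provided by the class at runtime
  return this._cachedState.nodes.has(nodeId); // illustrative field access
}

// The wiring helper copies every exported function onto the prototype once,
// rejecting duplicate names across modules (mirrors the loop described in _wire.js):
export function wireWarpMethods(Class, methodModules) {
  const seen = new Set();
  for (const mod of methodModules) {
    for (const [name, fn] of Object.entries(mod)) {
      if (typeof fn !== 'function') { continue; }
      if (seen.has(name)) {
        throw new Error(`wireWarpMethods: duplicate method "${name}"`);
      }
      seen.add(name);
      Class.prototype[name] = fn; // `this` resolves to the instance at call time
    }
  }
}
```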
- Create _wiredMethods.d.ts with module augmentation so tsc can see methods wired onto WarpGraph.prototype at runtime
- Add src/**/*.d.ts to tsconfig.json include
- Remove @private from _logTiming (wired methods need access)
- Fix MaterializedGraph reference in materializeAdvanced.methods.js
- Add @ts-nocheck to Phase 0 tests (they test dynamic prototype behavior)

tsc --noEmit now passes with 0 errors.
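A hypothetical excerpt of what such a module augmentation looks like; the module path and the three signatures shown are illustrative, not copied from the real _wiredMethods.d.ts.

```ts
// Declaration merging lets tsc see methods that only appear on
// WarpGraph.prototype at runtime via wireWarpMethods(). Path and signatures
// here are assumptions for illustration.
declare module './WarpGraph.js' {
  interface WarpGraph {
    hasNode(nodeId: string): Promise<boolean>;
    getFrontier(): Promise<Map<string, string>>;
    materialize(options?: { collectReceipts?: boolean }): Promise<unknown>;
  }
}
export {};
```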
Blanket src/**/*.d.ts caused GraphNode.d.ts and visualization/index.d.ts to shadow their .js counterparts, breaking ESLint project resolution.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings. Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
📝 Walkthrough

Splits WarpGraph into focused domain modules under src/domain/warp/, adds shared internals and a wiring utility, provides a TypeScript augmentation for wired methods, expands ESLint scope, and implements materialize/patch/fork/sync/query/subscribe/provenance/checkpoint/GC features plus tests and version bumps.

Changes
sequenceDiagram
participant Client as Client
participant WG as WarpGraph
participant Store as Persistence
participant Remote as RemotePeer
participant HTTP as HttpServer
Client->>WG: syncWith(remote, options)
WG->>Store: getFrontier()
WG->>WG: createSyncRequest()
alt remote is direct peer
WG->>Remote: send sync request
Remote->>WG: respond with sync response
else remote is HTTP
WG->>HTTP: POST /sync (with auth headers)
HTTP->>WG: processSyncRequest()
WG->>HTTP: return sync response
end
WG->>Store: applySyncResponse() -> persist patches
WG->>WG: maybe materialize() / update frontier / notify subscribers
WG->>Client: return sync result/status
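For illustration, a hedged usage sketch of the flow in the diagram above; apart from timeoutMs (which the review below references) the option names and return shapes are assumptions.

```js
// Illustrative only: drives the sequence shown above against an already-opened graph.
// `remote` may be a direct peer (processSyncRequest) or an HTTP endpoint.
const result = await graph.syncWith(remote, { timeoutMs: 5_000 });
console.log('sync result:', result);

const info = await graph.status(); // reports cachedState: 'fresh' | 'stale' | 'none'
console.log(info);
```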
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
🚥 Pre-merge checks — ✅ Passed checks (3 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Release Preflight
If you tag this commit as …
Actionable comments posted: 6
🤖 Fix all issues with AI agents
In `@src/domain/warp/_wiredMethods.d.ts`:
- Around line 15-99: The declaration signatures in WarpGraph are out of sync
with implementations: update translationCost to return Promise<{cost: number;
breakdown: any}>, change _loadPatchesBySha to Promise<Array<{patch: any; sha:
string}>>, expand createWormhole’s return to include fromSha, toSha, writerId,
payload and patchCount (e.g. Promise<{fromSha: string; toSha: string; writerId?:
string; payload?: any; patchCount: number}>), adjust patch’s callback type in
patch(build: ...) to accept (p: PatchBuilderV2) => void | Promise<void>, and
change _setMaterializedState to return Promise<{state: any; stateHash: string;
adjacency: any}> so the declarations (functions: translationCost,
_loadPatchesBySha, createWormhole, patch, _setMaterializedState) match their
implementations.
In `@src/domain/warp/materialize.methods.js`:
- Around line 49-51: The early return for the ceiling path currently bypasses
the try/catch and timing logic; move the ceiling check and the call to
this._materializeWithCeiling(ceiling, !!collectReceipts, t0) inside the existing
try block so the call is wrapped by the same try/catch and the
finally/after-catch timing/logging (this._logTiming('materialize', t0, { error
})) always runs; ensure you preserve the boolean coercion of collectReceipts,
rethrow the error after logging in the catch, and return the result as the main
path does.
In `@src/domain/warp/materializeAdvanced.methods.js`:
- Around line 24-43: The JSDoc expects "in" semantics but _resolveCeiling
currently checks options.ceiling !== undefined; change the conditional in the
_resolveCeiling function to test for the presence of the key (use 'ceiling' in
options) so that an explicit { ceiling: undefined } is treated as an override
and returns options.ceiling, otherwise fall back to this._seekCeiling.
In `@src/domain/warp/patch.methods.js`:
- Around line 396-450: Update the discoverTicks return type to include tickShas:
amend the JSDoc for function discoverTicks so perWriter entries are typed as {
ticks: number[], tipSha: string | null, tickShas: Record<number, string> } and
the overall return as { ticks: number[], maxTick: number, perWriter: Map<string,
{ticks:number[], tipSha:string|null, tickShas: Record<number,string>}> }; also
update the declaration in _wiredMethods.d.ts for the discoverTicks method to
match the same corrected return signature (replace the incorrect perWriter:
Map<string, number> with the Map to the proper object shape containing
tickShas).
In `@src/domain/warp/query.methods.js`:
- Around line 116-144: The runtime neighbors function returns objects shaped {
nodeId, label, direction } but the TypeScript declaration in _wiredMethods.d.ts
(around the currently noted line) declares Array<{ neighborId: string; label:
string }>; update the declaration to match the implementation by changing the
return type to Array<{ nodeId: string; label: string; direction: 'outgoing' |
'incoming' }>, ensuring the exported signature for neighbors aligns with the
runtime shape and the direction literal types match those emitted by neighbors.
In `@src/domain/warp/subscribe.methods.js`:
- Around line 246-271: The loop over this._subscribers using for...of is
vulnerable to mutations (e.g., unsubscribe() that splices the array) which can
cause skipping subscribers; fix by iterating a shallow copy (e.g., snapshot =
[...this._subscribers]) and loop over that snapshot when invoking
subscriber.onChange / handling subscriber.pendingReplay, while preserving
existing error handling for subscriber.onError so callbacks can safely mutate
the original this._subscribers without breaking delivery.
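A sketch of the snapshot iteration that last item asks for; the callback argument shapes and the pendingReplay handling are assumptions, only the copy-before-iterate idea comes from the review.

```js
// Sketch only: argument shapes and pendingReplay handling are assumptions.
export function _notifySubscribers(diff, state) {
  const snapshot = [...this._subscribers]; // callbacks may splice the original array
  for (const subscriber of snapshot) {
    try {
      subscriber.onChange(diff, state);
      subscriber.pendingReplay = false;
    } catch (err) {
      if (typeof subscriber.onError === 'function') {
        subscriber.onError(err);
      }
    }
  }
}
```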
🧹 Nitpick comments (12)
src/domain/warp/_wire.js (1)
32-47: Consider also checking for collisions against existing prototype members. Duplicate detection covers cross-module conflicts but not overwrites of methods/getters already defined on Class.prototype (e.g., _logTiming, property getters). An accidental export with a matching name would silently replace the original.

♻️ Proposed addition before the module loop

+  const existing = new Set(Object.getOwnPropertyNames(Class.prototype));
+
   for (let i = 0; i < methodModules.length; i++) {
     const mod = methodModules[i];
     for (const [name, fn] of Object.entries(mod)) {
       if (typeof fn !== 'function') {
         continue;
       }
+      if (existing.has(name)) {
+        throw new Error(
+          `wireWarpMethods: method "${name}" already exists on ${Class.name}.prototype — ` +
+          `attempted to overwrite from module index ${i}`
+        );
+      }
       if (seen.has(name)) {

test/unit/domain/WarpGraph.apiSurface.test.js (1)
49-56: Minor: accessor pair classification. If a property has both a getter and a setter, it would be labeled as 'getter' only. Consider handling the combined case for accuracy:

-      summary[name] = {
-        type: isGetter ? 'getter' : (isSetter ? 'setter' : 'method'),
+      summary[name] = {
+        type: (isGetter && isSetter) ? 'accessor' : isGetter ? 'getter' : isSetter ? 'setter' : 'method',

Not a correctness issue for snapshot stability, just a labeling accuracy nit.
src/domain/warp/materialize.methods.js (2)
96-100: Spread of writerPatches into allPatches could blow the stack for large writer histories.

allPatches.push(...writerPatches) passes all elements as arguments to push. If a single writer has a very large patch chain (tens of thousands+), this hits the call-stack argument limit.

♻️ Safer alternative

   for (const writerId of writerIds) {
     const writerPatches = await this._loadWriterPatches(writerId);
-    allPatches.push(...writerPatches);
+    for (const p of writerPatches) {
+      allPatches.push(p);
+    }
   }

Or use Array.prototype.concat with reassignment if the pattern is preferred.
180-186: _materializeGraph appears to redundantly call _setMaterializedState.

materialize() already calls _setMaterializedState(state) on line 129. After that, this._materializedGraph.state === state should be true (same reference), making the if block at line 182-184 a no-op in the normal flow. If this is intentional as a defensive guard, a brief comment would clarify the intent.

src/domain/warp/fork.methods.js (1)
237-255: _isAncestor has no upper bound on chain walk — potential infinite loop on corrupt data. If the commit graph has a cycle (corrupt repo), this loop will never terminate. While the "linear per-writer chain" assumption is documented, a safety bound (e.g., max iterations) would make this more resilient in production without changing the happy-path semantics.
🛡️ Suggested safety bound
 export async function _isAncestor(ancestorSha, descendantSha) {
   if (!ancestorSha || !descendantSha) {
     return false;
   }
   if (ancestorSha === descendantSha) {
     return true;
   }
   let cur = descendantSha;
-  while (cur) {
+  const MAX_WALK = 100_000;
+  let steps = 0;
+  while (cur) {
+    if (++steps > MAX_WALK) {
+      throw new Error(`_isAncestor: exceeded ${MAX_WALK} steps — possible cycle`);
+    }
     const nodeInfo = await this._persistence.getNodeInfo(cur);
     const parent = nodeInfo.parents?.[0] ?? null;
     if (parent === ancestorSha) {
       return true;
     }
     cur = parent;
   }
   return false;
 }

src/domain/warp/materializeAdvanced.methods.js (1)
282-339: materializeAt patchLoader silently stops on non-patch commits mid-chain.

At line 308–310, if a non-patch commit (e.g., checkpoint or anchor) sits in the middle of a writer's chain, the loader stops collecting and returns only the patches above it. This means patches below that non-patch commit in the range (fromSha, toSha] would be silently dropped. If this is intentional (checkpoints act as replay boundaries), a brief comment would help future maintainers.

src/domain/warp/patch.methods.js (1)
295-325: createWriter unconditionally writes to console.warn — noisy in production.

Line 300 uses console.warn regardless of environment. Since the logger warning at line 297 already handles this when a logger is configured, the console.warn could be limited to when no logger is present, or removed in favor of the structured logger-only approach.

Reduce noise in production

   if (this._logger) {
     this._logger.warn('[warp] createWriter() is deprecated. Use writer() or writer(id) instead.');
+  } else {
+    // eslint-disable-next-line no-console
+    console.warn('[warp] createWriter() is deprecated. Use writer() or writer(id) instead.');
   }
-  // eslint-disable-next-line no-console
-  console.warn('[warp] createWriter() is deprecated. Use writer() or writer(id) instead.');

src/domain/warp/checkpoint.methods.js (2)
252-284: _maybeRunGC and runGC use Date.now() instead of this._clock.now() — inconsistent with the rest of the codebase.

Other methods (e.g., createCheckpoint, runGC's own _logTiming) use this._clock.now() for timing, which supports test injection. Lines 259, 270, 312, and 356 all use Date.now() directly for GC timing, making GC time-dependent behavior non-mockable in tests.

Use injected clock consistently

   const inputMetrics = {
     ...metrics,
     patchesSinceCompaction: this._patchesSinceGC,
-    timeSinceCompaction: Date.now() - this._lastGCTime,
+    timeSinceCompaction: this._clock.now() - this._lastGCTime,
   };

And similarly for this._lastGCTime = Date.now() → this._lastGCTime = this._clock.now() on lines 270, 312, and 356.
381-395: getGCMetrics spreads raw metrics then adds aliases — resulting object has duplicate fields under different names.

The ...rawMetrics spread includes original fields like nodeLiveDots, edgeLiveDots, totalTombstones, etc. Then lines 389–391 add nodeCount, edgeCount, tombstoneCount as aliases. The returned object contains both the original and aliased names, which may confuse consumers.

Return only the canonical shape

   const rawMetrics = collectGCMetrics(this._cachedState);
   return {
-    ...rawMetrics,
     nodeCount: rawMetrics.nodeLiveDots,
     edgeCount: rawMetrics.edgeLiveDots,
     tombstoneCount: rawMetrics.totalTombstones,
+    tombstoneRatio: rawMetrics.tombstoneRatio,
     patchesSinceCompaction: this._patchesSinceGC,
     lastCompactionTime: this._lastGCTime,
   };

src/domain/warp/sync.methods.js (1)
150-186: status() calls getFrontier() twice — once indirectly via hasFrontierChanged() and once directly.

When _stateDirty is false, line 156 calls hasFrontierChanged() which internally calls getFrontier(), then line 173 calls getFrontier() again. Each call does a full ref discovery, so this doubles the I/O for the common "fresh" case.

Reuse the frontier

 export async function status() {
+  const frontier = await this.getFrontier();
+
   /** `@type` {'fresh' | 'stale' | 'none'} */
   let cachedState;
   if (this._cachedState === null) {
     cachedState = 'none';
-  } else if (this._stateDirty || await this.hasFrontierChanged()) {
+  } else if (this._stateDirty || !this._lastFrontier ||
+             frontier.size !== this._lastFrontier.size ||
+             ![...frontier].every(([w, sha]) => this._lastFrontier.get(w) === sha)) {
     cachedState = 'stale';
   } else {
     cachedState = 'fresh';
   }
   // ...
-  const frontier = await this.getFrontier();
   const writers = frontier.size;

src/domain/warp/provenance.methods.js (2)
11-11: Inconsistent QueryError import — direct path vs. barrel.

This file imports QueryError directly from '../errors/QueryError.js' while every other method module imports it from './_internal.js'. If the re-export in _internal.js is ever changed or wrapped, this file would diverge silently.

Align with the barrel pattern

-import QueryError from '../errors/QueryError.js';
+import { QueryError } from './_internal.js';
155-194: _computeBackwardCone assumes _provenanceIndex is non-null but doesn't guard itself.

This function is exported and callable directly. Line 170 casts this._provenanceIndex without a null check. If invoked outside materializeSlice (which does the guard), this would throw a non-descriptive error.

Add a defensive guard

 export async function _computeBackwardCone(nodeId) {
+  if (!this._provenanceIndex) {
+    throw new QueryError('No provenance index. Call materialize() first.', {
+      code: 'E_NO_STATE',
+    });
+  }
   const cone = new Map();
   const visited = new Set();
export async function discoverTicks() {
  const writerIds = await this.discoverWriters();
  /** @type {Set<number>} */
  const globalTickSet = new Set();
  const perWriter = new Map();

  for (const writerId of writerIds) {
    const writerRef = buildWriterRef(this._graphName, writerId);
    const tipSha = await this._persistence.readRef(writerRef);
    const writerTicks = [];
    /** @type {Record<number, string>} */
    const tickShas = {};

    if (tipSha) {
      let currentSha = tipSha;
      let lastLamport = Infinity;

      while (currentSha) {
        const nodeInfo = await this._persistence.getNodeInfo(currentSha);
        const kind = detectMessageKind(nodeInfo.message);
        if (kind !== 'patch') {
          break;
        }

        const patchMeta = decodePatchMessage(nodeInfo.message);
        globalTickSet.add(patchMeta.lamport);
        writerTicks.push(patchMeta.lamport);
        tickShas[patchMeta.lamport] = currentSha;

        // Check monotonic invariant (walking newest->oldest, lamport should decrease)
        if (patchMeta.lamport > lastLamport && this._logger) {
          this._logger.warn(`[warp] non-monotonic lamport for writer ${writerId}: ${patchMeta.lamport} > ${lastLamport}`);
        }
        lastLamport = patchMeta.lamport;

        if (nodeInfo.parents && nodeInfo.parents.length > 0) {
          currentSha = nodeInfo.parents[0];
        } else {
          break;
        }
      }
    }

    perWriter.set(writerId, {
      ticks: writerTicks.reverse(),
      tipSha: tipSha || null,
      tickShas,
    });
  }

  const ticks = [...globalTickSet].sort((a, b) => a - b);
  const maxTick = ticks.length > 0 ? ticks[ticks.length - 1] : 0;

  return { ticks, maxTick, perWriter };
}
discoverTicks return type in both JSDoc and .d.ts is incomplete/incorrect.
The function returns tickShas in the perWriter entries (line 439–443), but the JSDoc at line 389 declares perWriter: Map<string, {ticks: number[], tipSha: string|null}> — omitting tickShas. Additionally, _wiredMethods.d.ts line 85 declares it as perWriter: Map<string, number>, which is completely wrong.
Fix JSDoc (line 389) and _wiredMethods.d.ts (line 85)
In patch.methods.js, update the JSDoc:
-* perWriter: Map<string, {ticks: number[], tipSha: string|null}>
+* perWriter: Map<string, {ticks: number[], tipSha: string|null, tickShas: Record<number, string>}>

In _wiredMethods.d.ts, update line 85:
- discoverTicks(): Promise<{ ticks: number[]; maxTick: number; perWriter: Map<string, number> }>;
+ discoverTicks(): Promise<{ ticks: number[]; maxTick: number; perWriter: Map<string, { ticks: number[]; tipSha: string | null; tickShas: Record<number, string> }> }>;

🤖 Prompt for AI Agents
In `@src/domain/warp/patch.methods.js` around lines 396 - 450, Update the
discoverTicks return type to include tickShas: amend the JSDoc for function
discoverTicks so perWriter entries are typed as { ticks: number[], tipSha:
string | null, tickShas: Record<number, string> } and the overall return as {
ticks: number[], maxTick: number, perWriter: Map<string, {ticks:number[],
tipSha:string|null, tickShas: Record<number,string>}> }; also update the
declaration in _wiredMethods.d.ts for the discoverTicks method to match the same
corrected return signature (replace the incorrect perWriter: Map<string, number>
with the Map to the proper object shape containing tickShas).
export async function neighbors(nodeId, direction = 'both', edgeLabel = undefined) {
  await this._ensureFreshState();
  const s = /** @type {import('../services/JoinReducer.js').WarpStateV5} */ (this._cachedState);

  /** @type {Array<{nodeId: string, label: string, direction: 'outgoing' | 'incoming'}>} */
  const result = [];

  for (const edgeKey of orsetElements(s.edgeAlive)) {
    const { from, to, label } = decodeEdgeKey(edgeKey);

    if (edgeLabel !== undefined && label !== edgeLabel) {
      continue;
    }

    if ((direction === 'outgoing' || direction === 'both') && from === nodeId) {
      if (orsetContains(s.nodeAlive, to)) {
        result.push({ nodeId: to, label, direction: /** @type {const} */ ('outgoing') });
      }
    }

    if ((direction === 'incoming' || direction === 'both') && to === nodeId) {
      if (orsetContains(s.nodeAlive, from)) {
        result.push({ nodeId: from, label, direction: /** @type {const} */ ('incoming') });
      }
    }
  }

  return result;
}
neighbors return shape (nodeId + direction) doesn't match the TypeScript declaration.
The implementation returns { nodeId, label, direction } but _wiredMethods.d.ts line 21 declares the return as Array<{ neighborId: string; label: string }> — using neighborId instead of nodeId and omitting the direction field. TypeScript consumers will see the wrong field name.
One of these needs to be updated to match the other. If the implementation is correct:
Fix in _wiredMethods.d.ts (line 21)
- neighbors(nodeId: string, direction?: string, edgeLabel?: string): Promise<Array<{ neighborId: string; label: string }>>;
+ neighbors(nodeId: string, direction?: 'outgoing' | 'incoming' | 'both', edgeLabel?: string): Promise<Array<{ nodeId: string; label: string; direction: 'outgoing' | 'incoming' }>>;

🤖 Prompt for AI Agents
In `@src/domain/warp/query.methods.js` around lines 116 - 144, The runtime
neighbors function returns objects shaped { nodeId, label, direction } but the
TypeScript declaration in _wiredMethods.d.ts (around the currently noted line)
declares Array<{ neighborId: string; label: string }>; update the declaration to
match the implementation by changing the return type to Array<{ nodeId: string;
label: string; direction: 'outgoing' | 'incoming' }>, ensuring the exported
signature for neighbors aligns with the runtime shape and the direction literal
types match those emitted by neighbors.
…ng, safety

Actionable issues (6):
- Fix 7 return-type/signature mismatches in _wiredMethods.d.ts (neighbors, translationCost, _loadPatchesBySha, createWormhole, patch callback, _setMaterializedState, discoverTicks)
- Move ceiling check inside try/catch in materialize() so errors are logged
- Fix _resolveCeiling to use 'ceiling' in options (matches JSDoc semantics)
- Add tickShas to discoverTicks JSDoc perWriter type
- Iterate shallow copy of _subscribers to prevent mutation-during-dispatch
- Fix createWriter to only console.warn when no logger is configured

Nitpick improvements (9):
- Add prototype collision check in wireWarpMethods
- Replace allPatches.push(...writerPatches) with loop to avoid stack overflow
- Use this._clock.now() consistently instead of Date.now() in GC timing
- Remove duplicate fields from getGCMetrics (return canonical shape only)
- Eliminate double getFrontier() call in status()
- Align provenance.methods.js import to use _internal.js barrel
- Add defensive guard in _computeBackwardCone for null provenance index
- Add 100k step safety bound to _isAncestor to prevent infinite loops
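For the "move ceiling check inside try/catch in materialize()" item above, a sketch of the intended control flow; the options signature, the ceiling check itself, and _materializeFull() are placeholders — only the _materializeWithCeiling call, the timing call, and the rethrow behaviour come from the review.

```js
// Sketch, not the real implementation: _materializeFull() is a hypothetical
// stand-in for the existing main path, and the ceiling check may differ.
export async function materialize(options = {}) {
  const { ceiling, collectReceipts } = options;
  const t0 = this._clock.now();
  let error;
  try {
    if (ceiling !== undefined) {
      // The ceiling path now shares the same try/catch and timing as the main path
      return await this._materializeWithCeiling(ceiling, !!collectReceipts, t0);
    }
    return await this._materializeFull(!!collectReceipts);
  } catch (err) {
    error = err;
    throw err; // rethrow after recording so _logTiming still sees the failure
  } finally {
    this._logTiming('materialize', t0, { error });
  }
}
```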
- _resolveCeiling: coalesce undefined to null when using 'in' operator
- status(): add JSDoc cast for _lastFrontier to satisfy null narrowing
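A minimal sketch of the _resolveCeiling change described above, assuming it takes the options object and falls back to a _seekCeiling() call; the exact signature is an assumption.

```js
// Sketch: the real signature may differ.
export async function _resolveCeiling(options = {}) {
  if ('ceiling' in options) {
    // An explicit { ceiling: undefined } counts as an override per the JSDoc
    return options.ceiling ?? null; // coalesce undefined to null, per this commit
  }
  return this._seekCeiling();
}
```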
The --node-modules-dir flag used in Docker Deno tests disallows HTTPS URL imports. JSR specifiers work correctly in this mode.
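For illustration, a hedged before/after of the import swap this implies; @std/assert is a real JSR package, but whether it is the one this repo uses is an assumption.

```js
// Before (rejected under --node-modules-dir):
//   import { assertEquals } from "https://deno.land/std/assert/mod.ts";
// After (a jsr: specifier works in this mode):
import { assertEquals } from "jsr:@std/assert";

assertEquals(1 + 1, 2);
```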
@coderabbitai review please 🥕

Addressed in commits
Particularly nice to see:
I'll review the updated changes now.

✅ Actions performed

Review triggered.
Actionable comments posted: 3
🧹 Nitpick comments (9)
src/domain/warp/fork.methods.js (1)
105-107: Date.now() used for fork name instead of this._clock.now().

All other timing in this file uses this._clock.now() (lines 49, 161, 167), but the fork name generation on line 107 uses Date.now(). If the injected clock is a test double or monotonic clock, this inconsistency could produce surprising fork names in tests.

This is minor since fork names are just identifiers, but worth noting for test determinism.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/domain/warp/fork.methods.js` around lines 105 - 107, The fork name generation uses Date.now() which is inconsistent with the injected clock used elsewhere; change the resolvedForkName computation to use this._clock.now() instead of Date.now() (keep the fallback behaviour using forkName and preserve the random suffix generation) so that resolvedForkName, forkName and code paths that reference this._clock.now() remain deterministic in tests.

src/domain/warp/_wiredMethods.d.ts (1)
48-48: _relationToCheckpointHead return type could be narrowed to a string literal union.

The implementation returns 'same' | 'ahead' | 'behind' | 'diverged', but line 48 declares Promise<string>. Narrowing the return type would give consumers better type safety.

✏️ Narrower return type

- _relationToCheckpointHead(ckHead: string, incomingSha: string): Promise<string>;
+ _relationToCheckpointHead(ckHead: string, incomingSha: string): Promise<'same' | 'ahead' | 'behind' | 'diverged'>;

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/domain/warp/_wiredMethods.d.ts` at line 48, Update the _relationToCheckpointHead declaration to reflect the exact string-literal union returned by its implementation: change the return type from Promise<string> to Promise<'same' | 'ahead' | 'behind' | 'diverged'> so callers get precise type safety for the possible results of _relationToCheckpointHead.

src/domain/warp/patch.methods.js (1)
486-493: Misleading comment: propsChanged counts new props too, not just changed ones.

The comment on line 486 says "keys that existed in both but have different values," but the condition !oldReg (line 490) also counts properties that are entirely new from otherState. Consider updating the comment or renaming to propsAddedOrChanged for clarity.

✏️ Proposed comment fix

- // Count property changes (keys that existed in both but have different values)
- let propsChanged = 0;
+ // Count properties that are new or changed after merge
+ let propsChanged = 0;

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/domain/warp/patch.methods.js` around lines 486 - 493, The comment above the loop is inaccurate: the loop increments propsChanged for both new properties and changed values; update the comment to reflect this (e.g., "Count properties that are new or have changed values") or rename the variable propsChanged to propsAddedOrChanged and update all references in this function (search for propsChanged, mergedState.prop, and this._cachedState.prop) so the name matches the behavior.

src/domain/warp/materialize.methods.js (2)
182-188: _materializeGraph calls _setMaterializedState redundantly on first invocation.

materialize() (line 131) already calls _setMaterializedState(state). When _materializeGraph() is first called and this._materializedGraph is null, line 185 calls it again. This is safe (idempotent) but wastes a hash computation and adjacency rebuild.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/domain/warp/materialize.methods.js` around lines 182 - 188, _materializeGraph currently calls _setMaterializedState redundantly because materialize() already sets it; update _materializeGraph so it only calls _setMaterializedState when there is an existing materialized graph and the stored state differs. In practice, change the condition around the call to _setMaterializedState to require this._materializedGraph to be truthy (e.g. if (this._materializedGraph && this._materializedGraph.state !== state) { await this._setMaterializedState(state); }) so the initial invocation (when this._materializedGraph is null) does not recompute hashes/adjacencies. Ensure you reference the same types used today for the state argument.
152-161: cloneStateV5 is called on every materialize(), even with zero subscribers.

Line 161 always clones the full state to update _lastNotifiedState, regardless of whether any subscribers exist. For large graphs, this is a non-trivial allocation. Consider guarding it:

⚡ Skip clone when no subscribers

-  // Notify subscribers if state changed since last notification
-  // Also handles deferred replay for subscribers added with replay: true before cached state
-  if (this._subscribers.length > 0) {
-    const hasPendingReplay = this._subscribers.some(s => s.pendingReplay);
-    const diff = diffStates(this._lastNotifiedState, state);
-    if (!isEmptyDiff(diff) || hasPendingReplay) {
-      this._notifySubscribers(diff, state);
-    }
+  if (this._subscribers.length > 0) {
+    const hasPendingReplay = this._subscribers.some(s => s.pendingReplay);
+    const diff = diffStates(this._lastNotifiedState, state);
+    if (!isEmptyDiff(diff) || hasPendingReplay) {
+      this._notifySubscribers(diff, state);
+    }
+    // Clone state to prevent eager path mutations from affecting the baseline
+    this._lastNotifiedState = cloneStateV5(state);
+  } else {
+    // Reset baseline so first subscriber gets a correct diff
+    this._lastNotifiedState = null;
   }
-  // Clone state to prevent eager path mutations from affecting the baseline
-  this._lastNotifiedState = cloneStateV5(state);

Note: this requires diffStates to handle null as the before argument (which it already does, per subscribe.methods.js line 61).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/domain/warp/materialize.methods.js` around lines 152 - 161, The code always calls cloneStateV5(state) and assigns to this._lastNotifiedState on every materialize(), causing unnecessary allocations when there are no subscribers; change materialize() to only update this._lastNotifiedState (via cloneStateV5) when there are subscribers or when you need a non-null baseline (i.e., guard the clone behind the same subscriber check used for notification: this._subscribers.length > 0 or hasPendingReplay), relying on diffStates accepting null as the before argument when no baseline is kept. Ensure you reference and update the same symbols: materialize, this._subscribers, hasPendingReplay usage, diffStates, cloneStateV5, and this._lastNotifiedState.

src/domain/warp/checkpoint.methods.js (1)
147-152: _loadLatestCheckpoint silently swallows all checkpoint loading errors.

Line 149's bare catch returns null for any failure, including corruption, permission errors, or I/O failures. This means corrupted checkpoints are silently skipped, falling back to full re-materialization without any diagnostic. Consider logging the error before returning null.

🔍 Proposed: log before swallowing

   try {
     return await loadCheckpoint(this._persistence, checkpointSha, { codec: this._codec });
-  } catch {
+  } catch (err) {
+    if (this._logger) {
+      this._logger.warn('[warp] Failed to load checkpoint, falling back to full materialization', {
+        checkpointSha,
+        error: /** `@type` {Error} */ (err).message,
+      });
+    }
     return null;
   }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/domain/warp/checkpoint.methods.js` around lines 147 - 152, The current _loadLatestCheckpoint implementation swallows all errors from loadCheckpoint and returns null; modify the catch to capture the error (e.g. catch (err)) and log diagnostic information before returning null so corrupted/IO/permission issues are visible. Locate the call to loadCheckpoint(this._persistence, checkpointSha, { codec: this._codec }) inside _loadLatestCheckpoint and replace the bare catch with one that logs the error (use this._logger.error(...) if a logger exists, otherwise console.error with a message including checkpointSha and the error) and then return null.

src/domain/warp/subscribe.methods.js (1)
193-232: Polling timer is not cleared if the watch caller never calls unsubscribe().

If the WarpGraph instance is disposed or the watch is abandoned without calling unsubscribe(), the setInterval keeps running indefinitely, holding a reference to this (the WarpGraph) and periodically calling hasFrontierChanged() + materialize(). This is a resource leak for long-lived processes.

Consider documenting this expectation prominently, or using a WeakRef/cleanup pattern so the timer self-cancels when the graph is GC'd.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/domain/warp/subscribe.methods.js` around lines 193 - 232, The polling interval created in the watch flow (pollIntervalId with setInterval) can outlive the WarpGraph instance and is never cleared unless unsubscribe() is called, leaking resources; modify the implementation to ensure the timer is cleared when the graph is disposed or collected—either add a clear method invocation from the WarpGraph disposal routine (call clearInterval(pollIntervalId) and set pollIntervalId = null inside the graph's dispose/destroy method) and/or register a FinalizationRegistry/WeakRef tied to this to automatically clearInterval when the WarpGraph instance is GC'd; make sure to keep the existing unsubscribe() behavior (clearing pollIntervalId and calling subscription.unsubscribe()) and to reference pollIntervalId, pollInFlight, hasFrontierChanged, materialize, and unsubscribe in your changes.

src/domain/warp/sync.methods.js (2)
90-103: getFrontier() makes N sequential I/O calls — one per writer.

Each readRef call is awaited sequentially inside the loop. For graphs with many writers, this could be slow. A Promise.all would parallelize the reads.

This is a recommended optimization, not blocking.
⚡ Parallelize ref reads
 export async function getFrontier() {
   const writerIds = await this.discoverWriters();
   const frontier = createFrontier();
-  for (const writerId of writerIds) {
-    const writerRef = buildWriterRef(this._graphName, writerId);
-    const tipSha = await this._persistence.readRef(writerRef);
-    if (tipSha) {
-      updateFrontier(frontier, writerId, tipSha);
-    }
-  }
+  await Promise.all(writerIds.map(async (writerId) => {
+    const writerRef = buildWriterRef(this._graphName, writerId);
+    const tipSha = await this._persistence.readRef(writerRef);
+    if (tipSha) {
+      updateFrontier(frontier, writerId, tipSha);
+    }
+  }));
   return frontier;
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/domain/warp/sync.methods.js` around lines 90 - 103, getFrontier currently awaits this._persistence.readRef inside a for-loop causing N sequential I/O calls; change it to parallelize reads by mapping writerIds (from discoverWriters) to promises that call buildWriterRef(this._graphName, writerId) and this._persistence.readRef(writerRef), await Promise.all on that array of promises, then iterate the resulting tipSha values and call updateFrontier(frontier, writerId, tipSha) for non-empty results; keep createFrontier() and the final return as-is.
364-461: Date.now() on line 367 vs this._clock.now() on line 309.

The outer timing (t0) uses the injected clock, but attemptStart on line 367 uses Date.now() directly. Similarly, line 458 computes durationMs from Date.now(). If the injected clock is a test stub, attempt-level durations will use the real clock while the overall timing uses the stub, making test assertions unpredictable.

✏️ Use consistent clock source

-      const attemptStart = Date.now();
+      const attemptStart = this._clock.now();
 ...
-      const durationMs = Date.now() - attemptStart;
+      const durationMs = this._clock.now() - attemptStart;

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/domain/warp/sync.methods.js` around lines 364 - 461, The attempt-level timing in executeAttempt uses Date.now() for attemptStart and for computing durationMs, which mixes real time with the injected test clock used elsewhere (this._clock.now()); replace Date.now() usages in executeAttempt (attemptStart and durationMs computation) with this._clock.now() so all timing uses the injected clock (refer to executeAttempt, attemptStart, durationMs, and this._clock).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/domain/warp/checkpoint.methods.js`:
- Around line 162-179: In _loadPatchesSince, avoid using the spread operator
when merging patches into allPatches because allPatches.push(...patches) can
exceed the engine's argument limit; replace that spread with an explicit loop
(e.g., for (const p of patches) allPatches.push(p)) or use array concatenation
in a non-variadic way so very large patches arrays from _loadWriterPatches do
not cause a stack/argument overflow.
- Around line 252-284: The _maybeRunGC function computes timeSinceCompaction
using this._clock.now() - this._lastGCTime without guarding for this._lastGCTime
=== 0, which can spuriously trigger GC; update _maybeRunGC so
timeSinceCompaction uses the same guard as maybeRunGC (e.g.,
timeSinceCompaction: this._lastGCTime > 0 ? this._clock.now() - this._lastGCTime
: 0) when building inputMetrics for shouldRunGC (references: _maybeRunGC,
maybeRunGC, this._lastGCTime, timeSinceCompaction, collectGCMetrics,
shouldRunGC).
In `@src/domain/warp/sync.methods.js`:
- Around line 249-276: After applying the response and updating
this._cachedState (result.state) and merging response.patches into
this._lastFrontier in applySyncResponse, clear the dirty flag by setting
this._stateDirty = false before returning; locate applySyncResponse (which calls
applySyncResponseImpl and updates _cachedState, _lastFrontier and
_patchesSinceGC) and add the single assignment this._stateDirty = false
immediately after the frontier-merge block so status() will report "fresh".
---
Duplicate comments:
In `@src/domain/warp/_wiredMethods.d.ts`:
- Around line 15-99: The previously reported signature mismatches have been
resolved—no code changes required; confirm the declarations for translationCost,
_loadPatchesBySha, createWormhole, patch (the build callback) and
_setMaterializedState in the WarpGraph interface now match their implementations
and approve the PR as-is.
In `@src/domain/warp/materialize.methods.js`:
- Around line 48-52: This review is a duplicate approval notice referencing the
ceiling delegation now moved into the try/catch (the ceiling check and call to
this._materializeWithCeiling(ceiling, !!collectReceipts, t0)); remove the
duplicate review/comment to avoid noise and ensure no code changes are made—the
ceiling delegation must remain inside the try block and still call
_materializeWithCeiling as shown.
In `@src/domain/warp/subscribe.methods.js`:
- Around line 245-271: The iteration must use a shallow copy to avoid iterator
corruption when a subscriber unsubscribes during onChange; in _notifySubscribers
ensure you iterate over [...this._subscribers] (not this._subscribers directly)
and preserve the existing logic around pendingReplay, diffStates, and
isEmptyDiff so unsubscribe calls inside subscriber.onChange won't affect the
loop; if any other loop over this._subscribers exists, replace it with a
shallow-copy iteration as well.
---
Nitpick comments:
In `@src/domain/warp/_wiredMethods.d.ts`:
- Line 48: Update the _relationToCheckpointHead declaration to reflect the exact
string-literal union returned by its implementation: change the return type from
Promise<string> to Promise<'same' | 'ahead' | 'behind' | 'diverged'> so callers
get precise type safety for the possible results of _relationToCheckpointHead.
In `@src/domain/warp/checkpoint.methods.js`:
- Around line 147-152: The current _loadLatestCheckpoint implementation swallows
all errors from loadCheckpoint and returns null; modify the catch to capture the
error (e.g. catch (err)) and log diagnostic information before returning null so
corrupted/IO/permission issues are visible. Locate the call to
loadCheckpoint(this._persistence, checkpointSha, { codec: this._codec }) inside
_loadLatestCheckpoint and replace the bare catch with one that logs the error
(use this._logger.error(...) if a logger exists, otherwise console.error with a
message including checkpointSha and the error) and then return null.
In `@src/domain/warp/fork.methods.js`:
- Around line 105-107: The fork name generation uses Date.now() which is
inconsistent with the injected clock used elsewhere; change the resolvedForkName
computation to use this._clock.now() instead of Date.now() (keep the fallback
behaviour using forkName and preserve the random suffix generation) so that
resolvedForkName, forkName and code paths that reference this._clock.now()
remain deterministic in tests.
In `@src/domain/warp/materialize.methods.js`:
- Around line 182-188: _materializeGraph currently calls _setMaterializedState
redundantly because materialize() already sets it; update _materializeGraph so
it only calls _setMaterializedState when there is an existing materialized graph
and the stored state differs. In practice, change the condition around the call
to _setMaterializedState to require this._materializedGraph to be truthy (e.g.
if (this._materializedGraph && this._materializedGraph.state !== state) { await
this._setMaterializedState(state); }) so the initial invocation (when
this._materializedGraph is null) does not recompute hashes/adjacencies. Ensure
you reference the same types used today for the state argument.
- Around line 152-161: The code always calls cloneStateV5(state) and assigns to
this._lastNotifiedState on every materialize(), causing unnecessary allocations
when there are no subscribers; change materialize() to only update
this._lastNotifiedState (via cloneStateV5) when there are subscribers or when
you need a non-null baseline (i.e., guard the clone behind the same subscriber
check used for notification: this._subscribers.length > 0 or hasPendingReplay),
relying on diffStates accepting null as the before argument when no baseline is
kept. Ensure you reference and update the same symbols: materialize,
this._subscribers, hasPendingReplay usage, diffStates, cloneStateV5, and
this._lastNotifiedState.
In `@src/domain/warp/patch.methods.js`:
- Around line 486-493: The comment above the loop is inaccurate: the loop
increments propsChanged for both new properties and changed values; update the
comment to reflect this (e.g., "Count properties that are new or have changed
values") or rename the variable propsChanged to propsAddedOrChanged and update
all references in this function (search for propsChanged, mergedState.prop, and
this._cachedState.prop) so the name matches the behavior.
In `@src/domain/warp/subscribe.methods.js`:
- Around line 193-232: The polling interval created in the watch flow
(pollIntervalId with setInterval) can outlive the WarpGraph instance and is
never cleared unless unsubscribe() is called, leaking resources; modify the
implementation to ensure the timer is cleared when the graph is disposed or
collected—either add a clear method invocation from the WarpGraph disposal
routine (call clearInterval(pollIntervalId) and set pollIntervalId = null inside
the graph's dispose/destroy method) and/or register a
FinalizationRegistry/WeakRef tied to this to automatically clearInterval when
the WarpGraph instance is GC'd; make sure to keep the existing unsubscribe()
behavior (clearing pollIntervalId and calling subscription.unsubscribe()) and to
reference pollIntervalId, pollInFlight, hasFrontierChanged, materialize, and
unsubscribe in your changes.
In `@src/domain/warp/sync.methods.js`:
- Around line 90-103: getFrontier currently awaits this._persistence.readRef
inside a for-loop causing N sequential I/O calls; change it to parallelize reads
by mapping writerIds (from discoverWriters) to promises that call
buildWriterRef(this._graphName, writerId) and
this._persistence.readRef(writerRef), await Promise.all on that array of
promises, then iterate the resulting tipSha values and call
updateFrontier(frontier, writerId, tipSha) for non-empty results; keep
createFrontier() and the final return as-is.
- Around line 364-461: The attempt-level timing in executeAttempt uses
Date.now() for attemptStart and for computing durationMs, which mixes real time
with the injected test clock used elsewhere (this._clock.now()); replace
Date.now() usages in executeAttempt (attemptStart and durationMs computation)
with this._clock.now() so all timing uses the injected clock (refer to
executeAttempt, attemptStart, durationMs, and this._clock).
- B41: auto-generate .d.ts from JSDoc to prevent declaration drift
- B42: CI .d.ts signature validation hook
- B43: explicit vitest excludes for runtime-specific test suites
- B44: subscriber unsubscribe-during-callback e2e test
- B45: type-level .d.ts conformance test
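One possible shape for B41 (an assumption, not the repo's actual setup): keep the JSDoc in the .js modules as the source of truth and let tsc emit declarations from it, so the hand-written declarations cannot drift silently.

```js
// Hypothetical B41 sketch: tsc can emit .d.ts from JSDoc-typed JavaScript, e.g.
//   npx tsc --allowJs --declaration --emitDeclarationOnly --outDir types src/domain/warp/*.js
// A JSDoc-annotated export like this...
/**
 * @param {string} nodeId
 * @returns {Promise<boolean>}
 */
export async function hasNode(nodeId) {
  return Boolean(nodeId && this._cachedState); // illustrative body only
}
// ...yields `hasNode(nodeId: string): Promise<boolean>` in the generated declaration,
// which CI can diff against _wiredMethods.d.ts to catch signature drift.
```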
- Replace allPatches.push(...patches) with loop (stack safety)
- Add _lastGCTime > 0 guard in _maybeRunGC (spurious first-run GC)
- Clear _stateDirty in applySyncResponse (sync state freshness)
- Fix createHighTombstoneState test helper to use Dot objects
- Bump version to 11.2.1, update CHANGELOG
Actionable comments posted: 1
🧹 Nitpick comments (5)
src/domain/warp/sync.methods.js (2)
155-165: Staleness check in status() spreads the entire frontier into an array.

Line 161 uses [...frontier].every(...) which allocates an intermediate array of all frontier entries. For a lightweight status method documented as O(writers), this is fine for typical writer counts, but for...of with an early-exit would be more consistent with the stated goal and avoid the allocation.

♻️ Suggested simplification

-  } else if (this._stateDirty || !this._lastFrontier ||
-             frontier.size !== this._lastFrontier.size ||
-             ![...frontier].every(([w, sha]) => /** `@type` {Map<string, string>} */ (this._lastFrontier).get(w) === sha)) {
-    cachedState = 'stale';
+  } else if (this._stateDirty || !this._lastFrontier ||
+             frontier.size !== this._lastFrontier.size) {
+    cachedState = 'stale';
+  } else {
+    let isStale = false;
+    for (const [w, sha] of frontier) {
+      if (this._lastFrontier.get(w) !== sha) {
+        isStale = true;
+        break;
+      }
+    }
+    cachedState = isStale ? 'stale' : 'fresh';
   }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/domain/warp/sync.methods.js` around lines 155 - 165, The staleness check in status() currently uses [...frontier].every(...) which allocates an intermediate array; replace that expression with a for...of loop over frontier that compares each ([w, sha]) to this._lastFrontier.get(w) and sets cachedState = 'stale' then breaks on the first mismatch (otherwise continue to set 'fresh'), keeping the existing conditions (this._stateDirty, !this._lastFrontier, frontier.size mismatch) intact and using the same Map lookup semantics to avoid the allocation and allow early exit.
496-503: RetryExhaustedError cause is re-thrown, discarding retry metadata.

Line 503 throws err.cause (the last inner error), which loses the RetryExhaustedError.attempts count and any other retry context. Callers catching the re-thrown error won't know retries were exhausted — they'll see a plain SyncError or TimeoutError.

If this is intentional (to keep the public API surface simple), consider documenting it. Otherwise, wrapping or preserving the retry context would help consumers distinguish "first-attempt failure" from "gave up after N retries."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/domain/warp/sync.methods.js` around lines 496 - 503, The current catch path for RetryExhaustedError discards retry metadata by throwing err.cause; instead preserve retry context by re-throwing the RetryExhaustedError itself (throw err) or wrap the inner cause while copying retry properties (e.g., create a new Error that sets .cause = cause and .attempts = err.attempts or attach attempts to cause) so callers can still read RetryExhaustedError.attempts; modify the handler around RetryExhaustedError (the block referencing RetryExhaustedError, err.attempts and onStatus) to either throw err or augment the thrown error with the attempts and original cause before throwing.

src/domain/warp/checkpoint.methods.js (3)
29-85: Duplicated frontier-building pattern across multiple methods.
createCheckpoint (lines 36–46), syncCoverage (lines 109–116), _loadPatchesSince (lines 163), and also getFrontier in sync.methods.js all repeat the same discover-writers → iterate → readRef → collect pattern. Since getFrontier() already encapsulates this, consider reusing it in createCheckpoint and syncCoverage to reduce duplication.

Also applies to: 99-130, 90-103, 162-181
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/domain/warp/checkpoint.methods.js` around lines 29 - 85, The frontier-building loop is duplicated; replace the manual discoverWriters→for-loop→readRef logic in createCheckpoint, syncCoverage, and _loadPatchesSince with a single call to the existing getFrontier() helper (use await this.getFrontier()). Use the returned frontier map for updateFrontier usage and derive parents as Object.values(frontier).filter(Boolean) (or equivalent) where the code currently pushes sha into parents. Remove the duplicated discovery/readRef loop and any local frontier initialization in those functions so they reuse getFrontier() consistently.
147-151: Silent catch in _loadLatestCheckpoint hides corruption or I/O errors.

Returning null on any error (line 149–151) means callers can't distinguish "no checkpoint" from "checkpoint exists but is corrupted / unreadable." This could mask real issues during migration validation or incremental materialization.

At minimum, logging the error when a logger is available would aid debugging.
♻️ Suggested improvement
   try {
     return await loadCheckpoint(this._persistence, checkpointSha, { codec: this._codec });
-  } catch {
+  } catch (err) {
+    if (this._logger) {
+      this._logger.warn('Failed to load checkpoint', { sha: checkpointSha, error: err });
+    }
     return null;
   }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/domain/warp/checkpoint.methods.js` around lines 147 - 151, The current silent catch in _loadLatestCheckpoint around the call to loadCheckpoint(this._persistence, checkpointSha, { codec: this._codec }) hides real I/O or corruption errors; change the catch to capture the error (e.g., catch (err)) and, if a logger is available on the instance (this._logger or this.logger), call logger.error with a clear message and the error details (include checkpointSha and persistence/context), then return null; if no logger exists consider rethrowing non-ENOENT/not-found errors instead of swallowing them.
283-285: Silent catch swallows errors without any observability.

The empty catch block is intentional ("GC failure never breaks materialize"), but silently discarding the error makes debugging GC issues difficult. Consider logging at debug/warn level when a logger is available.

♻️ Suggested improvement

-  } catch {
-    // GC failure never breaks materialize
+  } catch (err) {
+    // GC failure never breaks materialize
+    if (this._logger) {
+      this._logger.warn('_maybeRunGC failed silently', { error: err });
+    }
   }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/domain/warp/checkpoint.methods.js` around lines 283 - 285, The empty catch that swallows errors (the block with the comment "GC failure never breaks materialize") should capture the exception and log it instead of discarding it: change the catch to catch (err) and emit a debug/warn-level log (using the existing logger if available, e.g., logger.warn or this.logger.warn, falling back to console.debug/console.warn) with a clear message like "GC failure during materialize" plus the error details, but keep the behavior of not rethrowing so materialize still proceeds.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/domain/warp/sync.methods.js`:
- Around line 367-371: executeAttempt is using Date.now() for attemptStart while
earlier t0 uses this._clock.now(), causing mixed clock sources; update
executeAttempt to use this._clock.now() instead of Date.now() and change any
other local duration calculations in this function (including the per-attempt
durationMs logic referenced around where durationMs is computed) to derive from
this._clock.now(); locate and replace occurrences inside the executeAttempt
closure and the nearby duration computation (around the durationMs / attempt
timing code) so the injected clock controls all timing and metrics consistently.
---
Duplicate comments:
In `@src/domain/warp/checkpoint.methods.js`:
- Around line 258-262: The timeSinceCompaction calculation in inputMetrics
should use the same guard logic as maybeRunGC(): ensure you compute
timeSinceCompaction as this._lastGCTime > 0 ? this._clock.now() -
this._lastGCTime : 0 (as done now), and verify any other usages (e.g., in
maybeRunGC(), checkpointSave, or telemetry code paths) consistently use the same
_lastGCTime > 0 guard with this._clock.now() and the same field names
(_patchesSinceGC, _lastGCTime, _clock) to avoid negative or misleading values.
- Around line 175-177: Remove the stray "[duplicate_comment]" review artifact
and any duplicate review text from the comment block; no code changes to the
stack-safe loop (for (const p of patches) { allPatches.push(p); }) are
needed—just delete the duplicate marker so the review/comment is clean and not
reported twice.
In `@src/domain/warp/sync.methods.js`:
- Line 276: Ensure the dirty flag is cleared only after the sync response has
been fully applied: in the sync handler (the method that processes the sync
response—e.g., the function that applies the incoming patch/response), keep or
move the statement this._stateDirty = false so it runs after all state
updates/merges are completed, not before; also remove the stale
duplicate_comment marker left in the diff.
---
Nitpick comments:
In `@src/domain/warp/checkpoint.methods.js`:
- Around line 29-85: The frontier-building loop is duplicated; replace the
manual discoverWriters→for-loop→readRef logic in createCheckpoint, syncCoverage,
and _loadPatchesSince with a single call to the existing getFrontier() helper
(use await this.getFrontier()). Use the returned frontier map for updateFrontier
usage and derive parents as Object.values(frontier).filter(Boolean) (or
equivalent) where the code currently pushes sha into parents. Remove the
duplicated discovery/readRef loop and any local frontier initialization in those
functions so they reuse getFrontier() consistently.
- Around line 147-151: The current silent catch in _loadLatestCheckpoint around
the call to loadCheckpoint(this._persistence, checkpointSha, { codec:
this._codec }) hides real I/O or corruption errors; change the catch to capture
the error (e.g., catch (err)) and, if a logger is available on the instance
(this._logger or this.logger), call logger.error with a clear message and the
error details (include checkpointSha and persistence/context), then return null;
if no logger exists consider rethrowing non-ENOENT/not-found errors instead of
swallowing them.
- Around line 283-285: The empty catch that swallows errors (the block with the
comment "GC failure never breaks materialize") should capture the exception and
log it instead of discarding it: change the catch to catch (err) and emit a
debug/warn-level log (using the existing logger if available, e.g., logger.warn
or this.logger.warn, falling back to console.debug/console.warn) with a clear
message like "GC failure during materialize" plus the error details, but keep
the behavior of not rethrowing so materialize still proceeds.
In `@src/domain/warp/sync.methods.js`:
- Around line 155-165: The staleness check in status() currently uses
[...frontier].every(...) which allocates an intermediate array; replace that
expression with a for...of loop over frontier that compares each ([w, sha]) to
this._lastFrontier.get(w) and sets cachedState = 'stale' then breaks on the
first mismatch (otherwise continue to set 'fresh'), keeping the existing
conditions (this._stateDirty, !this._lastFrontier, frontier.size mismatch)
intact and using the same Map lookup semantics to avoid the allocation and allow
early exit.
- Around line 496-503: The current catch path for RetryExhaustedError discards
retry metadata by throwing err.cause; instead preserve retry context by
re-throwing the RetryExhaustedError itself (throw err) or wrap the inner cause
while copying retry properties (e.g., create a new Error that sets .cause =
cause and .attempts = err.attempts or attach attempts to cause) so callers can
still read RetryExhaustedError.attempts; modify the handler around
RetryExhaustedError (the block referencing RetryExhaustedError, err.attempts and
onStatus) to either throw err or augment the thrown error with the attempts and
original cause before throwing.
Replace Date.now() with this._clock.now() for attemptStart and durationMs in executeAttempt closure, matching the outer t0 timing and enabling injected-clock testability.
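For that clock note, a rough sketch of the intended change inside the `executeAttempt` closure (only `this._clock.now()` comes from the note; the surrounding names are illustrative placeholders):

```js
// Use the injected clock so tests can drive timing deterministically.
const attemptStart = this._clock.now();             // was: Date.now()
const response = await doAttempt();                 // placeholder for the attempt body
const durationMs = this._clock.now() - attemptStart; // was computed with Date.now()
```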
Round 3 Feedback — All Issues Resolved
Already addressed (CodeRabbit didn't mark ✅):
Newly fixed:
@coderabbitai review please
✅ Actions performed
Review triggered.
Release Preflight
If you tag this commit as |
Actionable comments posted: 1
🧹 Nitpick comments (1)
src/domain/warp/sync.methods.js (1)
449-456: Response shape validation doesn't inspect individual patch entries.

The validation confirms `response.patches` is an array but doesn't verify that each element has the expected `{ writerId, sha }` shape before passing them to `applySyncResponse` (which iterates patches on line 265). Malformed entries are handled gracefully (the `if (writerId && sha)` guard on line 266 skips them), so this isn't a correctness issue — just noting that stricter upfront validation here could surface protocol bugs earlier.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/domain/warp/sync.methods.js` around lines 449 - 456, The validation for the sync response should be strengthened to ensure each entry in response.patches conforms to the expected shape before calling applySyncResponse: inside the existing validation block (where response.patches is checked) iterate response.patches and verify each patch is an object and has non-empty string properties writerId and sha (or whichever types your protocol requires); if any entry fails, throw a SyncError with code 'E_SYNC_PROTOCOL' and a clear message. Update the guard surrounding response.patches in the same function so malformed patch entries are rejected early rather than relying on applySyncResponse's per-entry guard.
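As a sketch of what that stricter upfront validation could look like inside the existing validation block (reusing the module's `SyncError` and the `'E_SYNC_PROTOCOL'` code already used elsewhere in this file; the message wording is illustrative):

```js
// Validate each entry of response.patches before handing it to applySyncResponse.
if (!Array.isArray(response.patches)) {
  throw new SyncError('Malformed sync response: patches must be an array', {
    code: 'E_SYNC_PROTOCOL',
  });
}
for (const patch of response.patches) {
  const ok = patch
    && typeof patch.writerId === 'string' && patch.writerId.length > 0
    && typeof patch.sha === 'string' && patch.sha.length > 0;
  if (!ok) {
    throw new SyncError('Malformed sync response: invalid patch entry', {
      code: 'E_SYNC_PROTOCOL',
      context: { patch },
    });
  }
}
```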
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/domain/warp/sync.methods.js`:
- Around line 375-378: The direct-peer branch in sync method currently awaits
remote.processSyncRequest(request) without a timeout, so if isDirectPeer is true
the call can hang indefinitely; wrap the call to
remote.processSyncRequest(request) with the same timeout(timeoutMs, ...) utility
used for the HTTP path (preserving the emit('requestSent') and
emit('responseReceived') semantics and honoring existing signal/abort behavior)
so that processSyncRequest is rejected after timeoutMs; update the assignment to
response accordingly and reuse the timeoutMs and timeout() helper already
present in this module.
---
Nitpick comments:
In `@src/domain/warp/sync.methods.js`:
- Around line 449-456: The validation for the sync response should be
strengthened to ensure each entry in response.patches conforms to the expected
shape before calling applySyncResponse: inside the existing validation block
(where response.patches is checked) iterate response.patches and verify each
patch is an object and has non-empty string properties writerId and sha (or
whichever types your protocol requires); if any entry fails, throw a SyncError
with code 'E_SYNC_PROTOCOL' and a clear message. Update the guard surrounding
response.patches in the same function so malformed patch entries are rejected
early rather than relying on applySyncResponse's per-entry guard.
```js
if (isDirectPeer) {
  emit('requestSent');
  response = await remote.processSyncRequest(request);
  emit('responseReceived');
```
Direct-peer sync path has no timeout — could hang indefinitely.
When isDirectPeer is true, remote.processSyncRequest(request) is awaited without any timeout wrapper, unlike the HTTP path which is guarded by timeout(timeoutMs, ...). A misbehaving or slow peer would block the caller forever with no recourse (the outer signal is only checked before the call via checkAborted, not during).
Consider wrapping the direct-peer call with the same timeout() utility:
Suggested change
```diff
 if (isDirectPeer) {
   emit('requestSent');
-  response = await remote.processSyncRequest(request);
+  response = await timeout(timeoutMs, () => remote.processSyncRequest(request));
   emit('responseReceived');
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
if (isDirectPeer) {
  emit('requestSent');
  response = await timeout(timeoutMs, () => remote.processSyncRequest(request));
  emit('responseReceived');
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/domain/warp/sync.methods.js` around lines 375 - 378, The direct-peer
branch in sync method currently awaits remote.processSyncRequest(request)
without a timeout, so if isDirectPeer is true the call can hang indefinitely;
wrap the call to remote.processSyncRequest(request) with the same
timeout(timeoutMs, ...) utility used for the HTTP path (preserving the
emit('requestSent') and emit('responseReceived') semantics and honoring existing
signal/abort behavior) so that processSyncRequest is rejected after timeoutMs;
update the assignment to response accordingly and reuse the timeoutMs and
timeout() helper already present in this module.
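The module's `timeout()` helper isn't shown in this excerpt; a guard with the same `timeout(ms, fn)` shape used in the suggestion above might look roughly like this (illustrative only, not the module's actual implementation):

```js
// Illustrative timeout guard: rejects if fn() does not settle within ms.
function timeout(ms, fn) {
  let timer;
  const deadline = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Timed out after ${ms}ms`)), ms);
  });
  return Promise.race([Promise.resolve().then(fn), deadline])
    .finally(() => clearTimeout(timer));
}

// Usage (matching the suggested change above):
// response = await timeout(timeoutMs, () => remote.processSyncRequest(request));
```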
- B46: ESLint ban Date.now() in domain files
- B47: orsetAdd dot argument validation
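If B46 is implemented with ESLint's built-in `no-restricted-properties` rule, a flat-config override scoped to the domain layer might look like this (the config location, glob, and message are assumptions):

```js
// eslint.config fragment (flat-config style, illustrative)
export default [
  {
    files: ['src/domain/**/*.js'],
    rules: {
      'no-restricted-properties': ['error', {
        object: 'Date',
        property: 'now',
        message: 'Use the injected clock (this._clock.now()) in domain code.',
      }],
    },
  },
];
```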
Release Preflight
If you tag this commit as |
Actionable comments posted: 1
🧹 Nitpick comments (3)
src/domain/warp/sync.methods.js (3)
466-509: Retry error unwrapping discards metadata.

On lines 498–503, when a `RetryExhaustedError` is caught, the code throws `err.cause` (the last failure), discarding the retry metadata (attempt count, all intermediate errors). Callers who catch the re-thrown error lose visibility into the fact that retries were exhausted. Consider preserving the `RetryExhaustedError` as the thrown error (or at least as the `cause` of the re-thrown error), so callers can distinguish "first attempt failed" from "all retries exhausted":

Suggested change

```diff
   if (err instanceof RetryExhaustedError) {
     const cause = /** @type {Error} */ (err.cause || err);
     if (typeof onStatus === 'function') {
       onStatus({ type: 'failed', attempt: err.attempts, error: cause });
     }
-    throw cause;
+    cause.retryAttempts = err.attempts;
+    throw cause;
   }
```

Or alternatively, wrap it:

```diff
-    throw cause;
+    throw new SyncError(cause.message, {
+      code: cause.code || 'E_SYNC_RETRIES_EXHAUSTED',
+      context: { attempts: err.attempts },
+      cause,
+    });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/domain/warp/sync.methods.js` around lines 466 - 509, The catch block currently unwraps RetryExhaustedError and throws err.cause, losing retry metadata; instead preserve the RetryExhaustedError (or wrap the cause and attach the original RetryExhaustedError as its cause) so callers can detect retries were exhausted. Concretely, in the catch branch that checks "if (err instanceof RetryExhaustedError)" (symbol: RetryExhaustedError) stop throwing "cause" directly — either rethrow "err" itself and pass its metadata to onStatus (use err.attempts), or create a new Error that includes the original cause but sets the RetryExhaustedError as the .cause property (so callers see both the final error and the retry exhaustion metadata); update the onStatus call to use err.attempts and ensure the thrown object preserves err (do not replace it with err.cause).
420-441: Unconsumed response body on HTTP error status codes.

When `res.status >= 400` or `>= 500`, the response body is never read or drained. In Node.js's `fetch` (undici), unconsumed bodies can prevent connection reuse and may trigger resource warnings under load. Consider consuming the body before throwing:

Suggested fix

```diff
 if (res.status >= 500) {
+  await res.body?.cancel();
   throw new SyncError(`Remote error: ${res.status}`, {
     code: 'E_SYNC_REMOTE',
     context: { status: res.status },
   });
 }
 if (res.status >= 400) {
+  await res.body?.cancel();
   throw new SyncError(`Protocol error: ${res.status}`, {
     code: 'E_SYNC_PROTOCOL',
     context: { status: res.status },
   });
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/domain/warp/sync.methods.js` around lines 420 - 441, The HTTP error handling currently throws SyncError for res.status >= 400 / >= 500 without reading the response body, which can leak resources in Node's fetch; before each throw (the blocks that reference res and throw new SyncError) consume/drain the body (e.g., await res.text() or similar inside a try/catch), include a short snippet of the drained body (or its length/truncated preview) in the SyncError context, and handle any errors from consuming the body gracefully so the original SyncError is still thrown; update the two relevant error branches that reference res.status and the subsequent JSON parsing site (where response = await res.json()) to ensure the body is consumed only once and errors are handled clearly.
155-165: Minor: spreading the entire frontier Map for the staleness check is wasteful.
`![...frontier].every(...)` allocates a temporary array of all frontier entries just to check equality. A simple `for...of` loop over the Map would allow early exit without the allocation.

Suggested refactor

```diff
-  } else if (this._stateDirty || !this._lastFrontier ||
-      frontier.size !== this._lastFrontier.size ||
-      ![...frontier].every(([w, sha]) => /** @type {Map<string, string>} */ (this._lastFrontier).get(w) === sha)) {
-    cachedState = 'stale';
+  } else if (this._stateDirty || !this._lastFrontier ||
+      frontier.size !== this._lastFrontier.size ||
+      (() => { for (const [w, sha] of frontier) { if (/** @type {Map<string, string>} */ (this._lastFrontier).get(w) !== sha) return true; } return false; })()) {
+    cachedState = 'stale';
```

Or better yet, extract the comparison into a small helper for readability:

```js
function frontiersEqual(a, b) {
  if (a.size !== b.size) return false;
  for (const [w, sha] of a) {
    if (b.get(w) !== sha) return false;
  }
  return true;
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/domain/warp/sync.methods.js` around lines 155 - 165, The staleness check currently spreads frontier into an array for every(...) which allocates unnecessarily; replace the spread/every expression in the cachedState calculation (the condition using frontier and this._lastFrontier) with a non-allocating comparison — either iterate with a for...of loop over frontier and early-return false on mismatch, or extract a helper like frontiersEqual(a,b) that checks sizes and then for...of a to compare b.get(key) to value; update the cachedState branch that uses this._lastFrontier, frontier, this._stateDirty and this._cachedState to call that helper or the loop instead of using [...frontier].every(...).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/domain/warp/sync.methods.js`:
- Around line 249-279: The code in applySyncResponse redundantly merges
response.patches into this._lastFrontier after applySyncResponseImpl has already
produced an updated frontier; replace the manual loop that iterates
response.patches with assigning the computed frontier from the impl: set
this._lastFrontier = result.frontier (ensuring types/shape match, e.g., a Map)
so _lastFrontier is updated atomically and you no longer iterate
response.patches; keep other updates (this._cachedState, _patchesSinceGC,
_stateDirty) intact.
---
Duplicate comments:
In `@src/domain/warp/sync.methods.js`:
- Around line 375-378: The direct-peer branch (isDirectPeer) calls
remote.processSyncRequest without any timeout, so a slow/misbehaving peer can
hang the caller; wrap the call to remote.processSyncRequest in a timeout guard
(e.g., Promise.race with a configurable timeout or an AbortSignal-aware call if
remote supports cancellation) and throw a clear timeout error when expired so
callers don't wait forever; ensure emit('requestSent') still occurs before the
call and emit('responseReceived') (or an appropriate timeout/failed event) is
emitted only after a definitive outcome, and keep shouldRetry behavior unchanged
for direct peers.
---
Nitpick comments:
In `@src/domain/warp/sync.methods.js`:
- Around line 466-509: The catch block currently unwraps RetryExhaustedError and
throws err.cause, losing retry metadata; instead preserve the
RetryExhaustedError (or wrap the cause and attach the original
RetryExhaustedError as its cause) so callers can detect retries were exhausted.
Concretely, in the catch branch that checks "if (err instanceof
RetryExhaustedError)" (symbol: RetryExhaustedError) stop throwing "cause"
directly — either rethrow "err" itself and pass its metadata to onStatus (use
err.attempts), or create a new Error that includes the original cause but sets
the RetryExhaustedError as the .cause property (so callers see both the final
error and the retry exhaustion metadata); update the onStatus call to use
err.attempts and ensure the thrown object preserves err (do not replace it with
err.cause).
- Around line 420-441: The HTTP error handling currently throws SyncError for
res.status >= 400 / >= 500 without reading the response body, which can leak
resources in Node's fetch; before each throw (the blocks that reference res and
throw new SyncError) consume/drain the body (e.g., await res.text() or similar
inside a try/catch), include a short snippet of the drained body (or its
length/truncated preview) in the SyncError context, and handle any errors from
consuming the body gracefully so the original SyncError is still thrown; update
the two relevant error branches that reference res.status and the subsequent
JSON parsing site (where response = await res.json()) to ensure the body is
consumed only once and errors are handled clearly.
- Around line 155-165: The staleness check currently spreads frontier into an
array for every(...) which allocates unnecessarily; replace the spread/every
expression in the cachedState calculation (the condition using frontier and
this._lastFrontier) with a non-allocating comparison — either iterate with a
for...of loop over frontier and early-return false on mismatch, or extract a
helper like frontiersEqual(a,b) that checks sizes and then for...of a to compare
b.get(key) to value; update the cachedState branch that uses this._lastFrontier,
frontier, this._stateDirty and this._cachedState to call that helper or the loop
instead of using [...frontier].every(...).
```js
export function applySyncResponse(response) {
  if (!this._cachedState) {
    throw new QueryError(E_NO_STATE_MSG, {
      code: 'E_NO_STATE',
    });
  }

  const currentFrontier = /** @type {any} */ (this._cachedState.observedFrontier); // TODO(ts-cleanup): narrow port type
  const result = /** @type {{state: import('../services/JoinReducer.js').WarpStateV5, frontier: Map<string, string>, applied: number}} */ (applySyncResponseImpl(response, this._cachedState, currentFrontier));

  // Update cached state
  this._cachedState = result.state;

  // Keep _lastFrontier in sync so hasFrontierChanged() won't misreport stale.
  // Merge the response's per-writer tips into the stored frontier snapshot.
  if (this._lastFrontier && Array.isArray(response.patches)) {
    for (const { writerId, sha } of response.patches) {
      if (writerId && sha) {
        this._lastFrontier.set(writerId, sha);
      }
    }
  }

  // Track patches for GC
  this._patchesSinceGC += result.applied;

  // State is now in sync with the frontier — clear dirty flag
  this._stateDirty = false;

  return result;
}
```
🧩 Analysis chain
🏁 Script executed:
# First, let's find the applySyncResponseImpl function definition
fd -n "*.js" | xargs grep -l "applySyncResponseImpl"Repository: git-stunts/git-warp
Length of output: 295
🏁 Script executed:
# Let's look at the structure and find applySyncResponseImpl definition
rg -n "export.*applySyncResponseImpl|function applySyncResponseImpl" --type=js -B 2 -A 15Repository: git-stunts/git-warp
Length of output: 45
🏁 Script executed:
# Check the applySyncResponse function more thoroughly to understand usage patterns
rg -n "applySyncResponse" --type=js -B 2 -A 10Repository: git-stunts/git-warp
Length of output: 29947
🏁 Script executed:
# Let's look at result.frontier usage patterns
rg -n "result\.frontier|\.frontier" --type=js | head -50Repository: git-stunts/git-warp
Length of output: 5779
🏁 Script executed:
# Find the applySyncResponseImpl implementation details
rg -n "export function applySyncResponse\|export const applySyncResponse" src/domain/services/SyncProtocol.js -A 50 | head -80Repository: git-stunts/git-warp
Length of output: 45
🏁 Script executed:
# Look at hasFrontierChanged to understand _lastFrontier usage
rg -n "hasFrontierChanged\|_lastFrontier" src/domain/warp/sync.methods.js -B 2 -A 8Repository: git-stunts/git-warp
Length of output: 45
🏁 Script executed:
# See how _lastFrontier is initialized and used
rg -n "_lastFrontier" src/domain/warp/ -B 1 -A 1Repository: git-stunts/git-warp
Length of output: 2403
🏁 Script executed:
# Get the full applySyncResponseImpl function
rg -n "export function applySyncResponse" src/domain/services/SyncProtocol.js -A 60 | head -100Repository: git-stunts/git-warp
Length of output: 2673
🏁 Script executed:
# Check how result.frontier is actually populated in applySyncResponseImpl
rg -n "newFrontier\|return.*frontier" src/domain/services/SyncProtocol.js | head -30Repository: git-stunts/git-warp
Length of output: 45
🏁 Script executed:
# Look at what currentFrontier is - the observedFrontier from cachedState
rg -n "observedFrontier" src/domain/warp/ -B 1 -A 1Repository: git-stunts/git-warp
Length of output: 1523
🏁 Script executed:
# Check if result.frontier is used anywhere else in the codebase
rg -n "result\.frontier" src/domain/warp/ src/domain/services/Repository: git-stunts/git-warp
Length of output: 140
🏁 Script executed:
# Let's understand the difference between _cachedState.observedFrontier and _lastFrontier
rg -n "observedFrontier\|_lastFrontier" src/domain/warp/materialize.methods.js -B 2 -A 2Repository: git-stunts/git-warp
Length of output: 45
🏁 Script executed:
# Check the complete applySyncResponse function in SyncProtocol to understand frontier updates
cat -n src/domain/services/SyncProtocol.js | sed -n '467,510p'
🏁 Script executed:
# Look at how currentFrontier and _lastFrontier relate - are they ever different?
rg -n "getFrontier\(\)" src/domain/warp/ -B 2 -A 5Repository: git-stunts/git-warp
Length of output: 5332
🏁 Script executed:
# Check if result.frontier is assigned to _cachedState.observedFrontier anywhere
rg -n "observedFrontier.*=" src/domain/warp/ -B 1 -A 1Repository: git-stunts/git-warp
Length of output: 45
🏁 Script executed:
# Check if _cachedState.observedFrontier is ever accessed after applySyncResponse
rg -n "_cachedState\.observedFrontier" src/domain/warp/ -B 2 -A 2Repository: git-stunts/git-warp
Length of output: 1376
🏁 Script executed:
# Look at the full applySyncResponse function in the wrapper to see the complete picture
cat -n src/domain/warp/sync.methods.js | sed -n '249,280p'
🏁 Script executed:
# Check if result.frontier should actually be used to replace _lastFrontier
# Let's look at the return value to confirm result has frontier
cat -n src/domain/services/SyncProtocol.js | sed -n '505,510p'
🏁 Script executed:
# Check the updateFrontier function to understand what happens when frontier is updated
rg -n "export.*updateFrontier\|function updateFrontier" src/domain/services/Frontier.js -A 10Repository: git-stunts/git-warp
Length of output: 45
🏁 Script executed:
# Verify the concern: does the manual merge achieve the same result as using result.frontier?
rg -n "cloneFrontier" src/domain/services/Frontier.js -A 8Repository: git-stunts/git-warp
Length of output: 389
🏁 Script executed:
# Check if _cachedState.observedFrontier needs to be updated with result.frontier
# Look at how observedFrontier is used after applySyncResponse
rg -n "_cachedState" src/domain/warp/sync.methods.js | grep -A 2 -B 2 "applySyncResponse"Repository: git-stunts/git-warp
Length of output: 506
🏁 Script executed:
# Check cloneStateV5 to see if observedFrontier is copied or referenced
rg -n "export.*cloneStateV5\|function cloneStateV5" src/domain/services/JoinReducer.js -A 15Repository: git-stunts/git-warp
Length of output: 45
🏁 Script executed:
# Let's trace if observedFrontier in _cachedState actually gets updated elsewhere after sync
rg -n "observedFrontier\s*=" src/domain/warp/ -B 2 -A 2Repository: git-stunts/git-warp
Length of output: 45
🏁 Script executed:
# Let's see if result.frontier and the manual merge produce the same result
# Check what happens when we look at the frontier returned vs what gets merged
cat -n src/domain/warp/sync.methods.js | sed -n '256,278p'
🏁 Script executed:
# Search for tests that verify the frontier is updated correctly after applySyncResponse
rg -n "applySyncResponse" test/unit/domain/WarpGraph.status.test.js -B 3 -A 10Repository: git-stunts/git-warp
Length of output: 2507
🏁 Script executed:
# Final verification: does _cachedState.observedFrontier actually get updated with result.frontier anywhere?
rg -n "_cachedState\.observedFrontier\s*=" src/domain/warp/ -B 2 -A 2Repository: git-stunts/git-warp
Length of output: 45
🏁 Script executed:
# Check if result.frontier should have been assigned to _cachedState.observedFrontier
# Look at how the state flows through applySyncResponseImpl
cat -n src/domain/services/SyncProtocol.js | sed -n '467,475p'
🏁 Script executed:
# Check how _lastFrontier is initialized and what it represents
rg -n "_lastFrontier\s*=" src/domain/warp/ -B 2 -A 1Repository: git-stunts/git-warp
Length of output: 647
🏁 Script executed:
# Check if there are any comments or tests explaining why manual merge is used
rg -n "manual.*merge|_lastFrontier.*frontier|response\.patches.*frontier" src/domain/warp/ test/ --type=js -B 1 -A 1Repository: git-stunts/git-warp
Length of output: 45
Use result.frontier instead of manually merging patches into _lastFrontier.
The result from applySyncResponseImpl includes a computed frontier property that already reflects the updated frontier after applying patches (lines 499–501 in SyncProtocol.js). The current code (lines 264–270) redundantly processes response.patches again to manually update _lastFrontier. Simplifying to this._lastFrontier = result.frontier would be clearer and automatically handle format changes without requiring manual patch iteration logic.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/domain/warp/sync.methods.js` around lines 249 - 279, The code in
applySyncResponse redundantly merges response.patches into this._lastFrontier
after applySyncResponseImpl has already produced an updated frontier; replace
the manual loop that iterates response.patches with assigning the computed
frontier from the impl: set this._lastFrontier = result.frontier (ensuring
types/shape match, e.g., a Map) so _lastFrontier is updated atomically and you
no longer iterate response.patches; keep other updates (this._cachedState,
_patchesSinceGC, _stateDirty) intact.
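If that change is adopted, the tail of `applySyncResponse` shown above would reduce to roughly the following (assuming `result.frontier` is a `Map<string, string>`, as the JSDoc cast in the current code indicates):

```js
// Replace the manual per-patch merge with the frontier computed by the impl.
this._cachedState = result.state;
this._lastFrontier = result.frontier;   // Map<writerId, sha> from applySyncResponseImpl
this._patchesSinceGC += result.applied;
this._stateDirty = false;               // state now matches the frontier
return result;
```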
Summary
- `WarpGraph.js` goes from 3260 lines to 416 lines (-87%) by extracting methods into 9 focused modules in `src/domain/warp/`
- `wireWarpMethods()` infrastructure wires the extracted methods onto `WarpGraph.prototype` at import time, preserving the existing public API surface
- TypeScript augmentation (`_wiredMethods.d.ts`) so `tsc --noEmit` sees the wired methods

Method modules
- `query.methods.js`
- `patch.methods.js`
- `sync.methods.js`
- `checkpoint.methods.js`
- `fork.methods.js`
- `materializeAdvanced.methods.js`
- `subscribe.methods.js`
- `provenance.methods.js`
- `materialize.methods.js`

Test plan
- `tsc --noEmit` passes with 0 errors

Summary by CodeRabbit