
feat: add poll comments and batched vote caching #271

Merged
spe1020 merged 3 commits into main from feat/poll-comments-vote-cache on Mar 26, 2026

Conversation

spe1020 (Contributor) commented Mar 26, 2026

Summary

  • Poll comments: Added FeedComments component to the polls feed page so users can comment below each poll. Enabled the comment button in NoteActionBar.
  • Batched vote cache (voteCache.ts): Replaces per-poll NDK subscriptions with a shared cache that microtask-batches requests, fetches from Primal caching servers first for speed, then fills gaps via a single relay subscription, and keeps one live subscription for real-time vote updates.
  • Primal cache vote support: Extended PrimalCacheService to handle kind:1018 vote events and added fetchVoteEvents for batch-fetching votes across multiple polls in one request.

Before: N polls on screen = N persistent subscriptions, each recounting all votes on every event.
After: N polls = 1 Primal cache request + 1 relay request + 1 live subscription. Votes deduped in-memory, recounted only on new unique events. Ref-counted cleanup with 60s delay prevents thrashing on navigation.
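
The microtask-batching idea described above can be illustrated with a small sketch. Names such as `createBatcher` and `BatchFetcher` are illustrative assumptions, not the actual voteCache.ts API: every request made in the same tick is coalesced into one fetch scheduled on a microtask.

```typescript
// Sketch of microtask batching (illustrative names, not the real voteCache.ts API).
type BatchFetcher = (ids: string[]) => Promise<Map<string, number>>;

function createBatcher(fetchBatch: BatchFetcher) {
  let pending: string[] = [];
  let scheduled: Promise<Map<string, number>> | null = null;

  return (id: string): Promise<number | undefined> => {
    pending.push(id);
    if (scheduled === null) {
      scheduled = new Promise<Map<string, number>>((resolve) => {
        // All calls made in the current tick land in `pending` before this runs,
        // so N components asking for votes produce a single batched fetch.
        queueMicrotask(() => {
          const ids = pending;
          pending = [];
          scheduled = null;
          resolve(fetchBatch(ids));
        });
      });
    }
    const batch = scheduled;
    return batch.then((counts) => counts.get(id));
  };
}
```

With this shape, N `PollDisplay` components mounted in one render pass share one underlying request, which is the "N polls = 1 request" behavior the summary claims.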

Test plan

  • Navigate to /polls — polls load with correct vote tallies
  • Vote on a poll — tally updates immediately (optimistic local vote)
  • Click the comment icon on a poll — comment section expands
  • Post a comment on a poll — comment appears in the list
  • Navigate away from polls page and back — votes reload from cache efficiently
  • Verify PollDisplay in the main feed (FoodstrFeedOptimized) still shows correct votes

🤖 Generated with Claude Code

Add comment support to the polls feed page and replace per-poll vote
subscriptions with an efficient batched vote cache that uses Primal
caching servers for fast initial load and a single live subscription
for real-time updates.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

cloudflare-workers-and-pages bot commented Mar 26, 2026

Deploying frontend with Cloudflare Pages

Latest commit: 39b043a
Status: ⚡️ Build in progress...



cloudflare-workers-and-pages bot commented Mar 26, 2026

Deploying with Cloudflare Workers

The latest updates on your project. Learn more about integrating Git with Workers.

| Status | Name | Latest Commit | Updated (UTC) |
| --- | --- | --- | --- |
| ❌ Deployment failed | frontend | 39b043a | Mar 26 2026, 01:07 AM |

@spe1020 spe1020 requested a review from Copilot March 26, 2026 00:47
Contributor

Copilot AI left a comment


Pull request overview

Adds poll commenting UI to the /polls feed and introduces a shared, batched vote-fetching/caching layer to reduce per-poll subscriptions and recount work, including Primal-cache support for kind:1018 vote events.

Changes:

  • Added a new /polls page that renders polls with actions and an inline comments section.
  • Introduced voteCache.ts to microtask-batch vote fetches, use Primal as a fast path, backfill via a single relay query, and maintain one live subscription for updates.
  • Extended PrimalCacheService to collect kind:1018 events and added batch vote fetching helpers.

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 8 comments.

File Description
src/routes/polls/+page.svelte New polls feed page rendering polls with actions + FeedComments.
src/lib/voteCache.ts New shared batched vote cache + single live subscription mechanism.
src/lib/primalCache.ts Adds kind:1018 handling and new Primal cache fetch helpers for polls/votes.
src/components/PollDisplay.svelte Switches vote counting from per-component subscription to voteCache + optimistic local vote injection.


```svelte
<script lang="ts">
import { onMount, onDestroy } from 'svelte';
import { ndk, userPublickey, ndkConnected, ensureNdkConnected } from '$lib/nostr';
import { NDKRelaySet } from '@nostr-dev-kit/ndk';
```

Copilot AI Mar 26, 2026


userPublickey and ndkConnected are imported but never referenced in this component. This can fail lint/CI in this repo; remove the unused imports (or use the $userPublickey/$ndkConnected stores if they were intended).

Suggested change:

```typescript
import { NDKRelaySet } from '@nostr-dev-kit/ndk';
```

Copilot uses AI. Check for mistakes.

```typescript
import { writable, type Readable } from 'svelte/store';
import { get } from 'svelte/store';
import { NDKEvent } from '@nostr-dev-kit/ndk';
```

Copilot AI Mar 26, 2026


NDKEvent is imported as a runtime value, but this module only uses it as a type. Import it with import type (or remove it if not needed) to avoid unused import warnings and unnecessary bundling.

Suggested change:

```diff
- import { NDKEvent } from '@nostr-dev-kit/ndk';
+ import type { NDKEvent } from '@nostr-dev-kit/ndk';
```

Copilot uses AI. Check for mistakes.
Comment on lines +181 to +207
```typescript
const idsToFetch = [...pollIds].filter((id) => !fetchedPolls.has(id));
if (idsToFetch.length === 0) return;

// Ensure empty maps exist before events arrive
for (const id of idsToFetch) {
  if (!voteEventsByPoll.has(id)) {
    voteEventsByPoll.set(id, new Map());
  }
}

// Phase 1: Primal cache (fast — typically <1s)
try {
  const events = await fetchVoteEventsFromPrimal(ndkInstance, idsToFetch);
  for (const event of events) {
    routeVoteEvent(event);
  }
} catch (err) {
  console.debug('[VoteCache] Primal cache fetch skipped:', err);
}

// Phase 2: Relay fetch for completeness (single subscription for all polls)
try {
  await new Promise<void>((resolve) => {
    const sub = ndkInstance.subscribe(
      { kinds: [1018 as number], '#e': idsToFetch },
      { closeOnEose: true }
    );
```

Copilot AI Mar 26, 2026


executeBatch can end up querying a very large number of poll IDs at once (e.g., /polls requests up to 200 polls). Passing an unbounded #e array into a single relay subscription and a single Primal request risks relay rejections / oversized requests. Consider chunking idsToFetch into smaller batches (similar to src/lib/stores/groupZapReceipts.ts which caps at 100 IDs per sub) and merging results into the same per-poll caches.

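A chunking helper along the lines this review suggests could look like the sketch below; `chunk` is a hypothetical name, not code from the repo:

```typescript
// Split a large id list into fixed-size slices so each relay subscription
// and Primal request stays within a predictable size.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}
```

Each slice would then be fetched independently and the results merged into the same per-poll caches.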
Comment on lines +209 to +215
```typescript
sub.on('event', (e: NDKEvent) => routeVoteEvent(e));
sub.on('eose', () => resolve());

// Safety timeout
setTimeout(() => {
  try { sub.stop(); } catch {}
  resolve();
```

Copilot AI Mar 26, 2026


The relay fetch Promise sets a safety setTimeout, but the timer is never cleared on eose. For large batches this can accumulate timers and will also call sub.stop() after the subscription already closed. Store the timeout handle and clearTimeout it in the eose handler (and/or when resolving) to avoid stray timers.

Suggested change:

```diff
- sub.on('event', (e: NDKEvent) => routeVoteEvent(e));
- sub.on('eose', () => resolve());
- // Safety timeout
- setTimeout(() => {
-   try { sub.stop(); } catch {}
-   resolve();
+ let settled = false;
+ let timeoutId: ReturnType<typeof setTimeout> | null = null;
+ const finish = () => {
+   if (settled) return;
+   settled = true;
+   if (timeoutId !== null) {
+     clearTimeout(timeoutId);
+     timeoutId = null;
+   }
+   resolve();
+ };
+ sub.on('event', (e: NDKEvent) => routeVoteEvent(e));
+ sub.on('eose', () => {
+   finish();
+ });
+ // Safety timeout
+ timeoutId = setTimeout(() => {
+   try {
+     sub.stop();
+   } catch {}
+   finish();
```
Comment on lines +218 to +228
```typescript
} catch (err) {
  console.debug('[VoteCache] Relay fetch failed:', err);
}

// Mark as fetched so re-mounts don't re-fetch
for (const id of idsToFetch) {
  fetchedPolls.add(id);
}

// Phase 3: Start/update the single live subscription
refreshLiveSubscription();
```

Copilot AI Mar 26, 2026


fetchedPolls is marked for all idsToFetch even if the relay fetch times out or fails. In those cases, future mounts won’t retry the backfill, and the live subscription uses since: now, so historical votes may never be loaded. Consider only marking a poll as fetched after a successful completion/backfill, or track partial status and allow retry on failures/timeouts.

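The retry-friendly bookkeeping this review asks for might look like the following sketch; `backfill`, `fetchFromRelays`, and the success-signal shape are illustrative assumptions, not the repo's actual code:

```typescript
// Only mark polls as fetched after the backfill completed cleanly (e.g. a
// real EOSE rather than the safety timeout), so a failed or timed-out fetch
// can be retried on the next mount.
const fetchedPolls = new Set<string>();

async function backfill(
  ids: string[],
  fetchFromRelays: (ids: string[]) => Promise<boolean> // true only on clean completion
): Promise<boolean> {
  const completed = await fetchFromRelays(ids);
  if (completed) {
    for (const id of ids) fetchedPolls.add(id);
  }
  return completed;
}
```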
```typescript
events: [],
follows: [],
userStats: null,
type: 'articles'
```

Copilot AI Mar 26, 2026


In fetchPolls, the pending request is stored with type: 'articles'. It works today because handleMessage resolves 'articles' like a feed response, but it’s misleading and easy to break in future refactors. Consider using type: 'feed' (or introducing a dedicated 'polls' type in PendingRequest) so the request purpose matches the type.

Suggested change:

```diff
- type: 'articles'
+ type: 'feed'
```

Comment on lines +633 to +638
```typescript
const requestId = this.generateRequestId();
const request = ['REQ', requestId, {
  kinds: [1018],
  '#e': pollIds,
  limit: pollIds.length * 100
}];
```

Copilot AI Mar 26, 2026


fetchVoteEvents sets limit: pollIds.length * 100. With a large poll batch (e.g., up to 200 polls on /polls), this can become a 20k-event request which may be slow or rejected by the cache server. Consider capping the limit and/or chunking pollIds into multiple requests (then merging results) to keep request size predictable.

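Capping the request limit, as suggested, is a one-line change; the helper name and the 5000 cap below are illustrative (the follow-up commit also settles on a 5000 cap):

```typescript
// Keep the per-request event limit bounded regardless of batch size.
const MAX_VOTE_LIMIT = 5000;

function voteRequestLimit(pollCount: number): number {
  return Math.min(pollCount * 100, MAX_VOTE_LIMIT);
}
```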
Comment on lines +28 to +31
```typescript
const oneHourAgo = (event.created_at || 0) - 3600;
const recentByAuthor = existing.filter(
  (e) => e.pubkey === event.pubkey && (e.created_at || 0) > oneHourAgo
);
```

Copilot AI Mar 26, 2026


exceedsRateLimit computes oneHourAgo relative to the incoming event’s created_at and then counts existing polls newer than that timestamp, without bounding the upper end to event.created_at. If events arrive out of order (common with multi-relay EOSE), newer polls will cause older polls from different hours to be incorrectly filtered. Consider comparing against a fixed window (e.g., last hour from Date.now()), or constrain the window to [event.created_at - 3600, event.created_at] (or bucket by hour) so only polls from the same hour are counted.

Suggested change:

```diff
- const oneHourAgo = (event.created_at || 0) - 3600;
- const recentByAuthor = existing.filter(
-   (e) => e.pubkey === event.pubkey && (e.created_at || 0) > oneHourAgo
- );
+ const createdAt = event.created_at || 0;
+ const windowStart = createdAt - 3600;
+ const windowEnd = createdAt;
+ const recentByAuthor = existing.filter((e) => {
+   const eCreatedAt = e.created_at || 0;
+   return (
+     e.pubkey === event.pubkey &&
+     eCreatedAt > windowStart &&
+     eCreatedAt <= windowEnd
+   );
+ });
```

- Remove unused imports (userPublickey, ndkConnected) from polls page
- Use `import type` for NDKEvent in voteCache (type-only usage)
- Chunk large vote batches (max 50 IDs per request) to avoid relay
  rejections and oversized Primal requests
- Fix timer leak in relay fetch: clear timeout on eose, use settled
  flag to prevent double-resolve
- Only mark polls as fetched after successful relay completion (not
  on timeout) so future mounts can retry the backfill
- Cap fetchVoteEvents limit to 5000 to keep request size predictable
- Fix fetchPolls pending request type from 'articles' to 'feed'
- Fix rate limit window to bound upper end at event.created_at so
  out-of-order events don't incorrectly filter polls from other hours

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@spe1020 spe1020 merged commit a0f6db0 into main Mar 26, 2026
1 of 3 checks passed
@spe1020 spe1020 deleted the feat/poll-comments-vote-cache branch March 26, 2026 01:03
