Deliver a practical-mode VEIL implementation in this Rust workspace that can:
- 1) build/encrypt objects, 2) erasure-code into shards, 3) forward across multi-lane transports, 4) reconstruct/decrypt at subscribers, and 5) maintain high shard coverage with rarity-biased caching.
- Profiles: `SMALL` (k=6, n=10) and `LARGE` (k=10, n=16)
- Buckets: 16 KiB, 32 KiB, 64 KiB
- Limits: `MAX_OBJECT_SIZE` = 256 KiB, `TARGET_BATCH_SIZE` = 96 KiB
- Epoch mode: `EPOCH_SECONDS` = 86400 (24 h)
- Cache TTL: 90 minutes (or equivalent simulation steps)
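The parameters above can be sketched as plain Rust constants with a profile/bucket selection helper. This is illustrative only: the names (`Profile`, `pick_profile`, `shard_len`, `bucket_for`) and the `TARGET_BATCH_SIZE` threshold used to choose between profiles are assumptions, not the actual `veil-fec` API.

```rust
const KIB: usize = 1024;
const MAX_OBJECT_SIZE: usize = 256 * KIB;
const TARGET_BATCH_SIZE: usize = 96 * KIB;
const BUCKETS: [usize; 3] = [16 * KIB, 32 * KIB, 64 * KIB];

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Profile {
    Small, // k = 6,  n = 10
    Large, // k = 10, n = 16
}

impl Profile {
    fn k(self) -> usize { match self { Profile::Small => 6, Profile::Large => 10 } }
    fn n(self) -> usize { match self { Profile::Small => 10, Profile::Large => 16 } }
}

/// Assumed heuristic: small objects use SMALL, larger ones LARGE.
fn pick_profile(object_len: usize) -> Profile {
    if object_len <= TARGET_BATCH_SIZE { Profile::Small } else { Profile::Large }
}

/// Raw shard payload size before padding: ceil(object_len / k).
fn shard_len(object_len: usize, profile: Profile) -> usize {
    (object_len + profile.k() - 1) / profile.k()
}

/// Pad a shard up to the smallest bucket that fits (16/32/64 KiB).
fn bucket_for(shard: usize) -> Option<usize> {
    BUCKETS.iter().copied().find(|&b| shard <= b)
}

fn main() {
    let obj = 200 * KIB;
    assert!(obj <= MAX_OBJECT_SIZE);
    let p = pick_profile(obj);             // LARGE for a 200 KiB object
    let raw = shard_len(obj, p);           // 20 KiB per shard
    let bucket = bucket_for(raw).unwrap(); // padded to the 32 KiB bucket
    println!("profile={:?} (k={}, n={}) shard={} bucket={}", p, p.k(), p.n(), raw, bucket);
}
```

Bucket padding applies per shard, which is why a 256 KiB object still fits the 64 KiB bucket ceiling once split across `k` shards.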
- `veil-core`: tags, hashes, fixed types (`Tag`, `ObjectRoot`, `ShardId`), and error model
- `veil-codec`: canonical `ObjectV1` + `ShardV1` encode/decode and validation
- `veil-crypto`: AEAD encrypt/decrypt and optional signatures
- `veil-fec`: profile selection, RS sharding/reconstruction, shard sizing
- `veil-node`: subscription filter, dedupe/cache, inbox/reconstruction, ACK flow
- `veil-transport`: lane abstraction and multi-lane send policy
- `veil-sim`: packet loss/latency scenarios and cache-pressure behavior
- `policy` (in `veil-node` initially): local WoT-based forwarding/cache/UI prioritization hooks
- Feasible with low risk: the current architecture already keeps protocol semantics in `veil-core`/`veil-codec` and policy/runtime in `veil-node`.
- Low-impact path: evolve `veil-transport` into a byte-blob adapter contract without changing shard/object wire formats.
- No protocol changes required: lane identity remains local policy and is not encoded in shard headers.
- Incremental rollout: implement adapter + runtime loop behind new APIs, then migrate existing paths without breaking tests/examples.
- Feasible with low risk: WoT is a local prioritization layer; it does not require protocol, shard header, or transport changes.
- Minimum-impact path: add policy hooks in `veil-node` (`classify`, `quota`, `budget`, `eviction_priority`) with safe defaults that preserve current behavior.
- No global trust requirement: v1 uses local follow/mute/block plus bounded endorsements (depth <= 2, thresholded).
- Pipeline invariance: WoT influences ordering and quotas only; object validity, reconstruction, and delivery logic remain unchanged.
- Introduce an adapter trait focused on opaque bytes:
  - `send(peer, bytes)`
  - `recv() -> (peer, bytes)`
  - opaque peer handle for replies
  - optional `max_payload_hint()`
- Keep existing lane interfaces as compatibility wrappers during migration.
- Exit criteria: mock adapter tests prove lossy/unordered delivery is tolerated.
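The adapter contract above can be sketched as a trait over opaque bytes plus an in-memory mock of the kind the exit criteria call for. The names (`ByteAdapter`, `PeerHandle`, `MockAdapter`) are illustrative; the real trait in `veil-transport` may differ (e.g. async methods, richer error types).

```rust
use std::io;

/// Opaque peer handle: the adapter defines its meaning; the node only
/// echoes it back for replies.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
struct PeerHandle(u64);

trait ByteAdapter {
    /// Best-effort send; delivery may be lossy and unordered.
    fn send(&mut self, peer: &PeerHandle, bytes: &[u8]) -> io::Result<()>;
    /// Receive the next datagram, whoever sent it.
    fn recv(&mut self) -> io::Result<(PeerHandle, Vec<u8>)>;
    /// Optional coarse capability: max payload the lane can carry.
    fn max_payload_hint(&self) -> Option<usize> { None }
}

/// In-memory mock adapter for lossy/unordered-delivery tests.
struct MockAdapter {
    inbox: std::collections::VecDeque<(PeerHandle, Vec<u8>)>,
    sent: Vec<(PeerHandle, Vec<u8>)>,
}

impl ByteAdapter for MockAdapter {
    fn send(&mut self, peer: &PeerHandle, bytes: &[u8]) -> io::Result<()> {
        self.sent.push((peer.clone(), bytes.to_vec()));
        Ok(())
    }
    fn recv(&mut self) -> io::Result<(PeerHandle, Vec<u8>)> {
        self.inbox.pop_front()
            .ok_or_else(|| io::Error::new(io::ErrorKind::WouldBlock, "inbox empty"))
    }
    fn max_payload_hint(&self) -> Option<usize> { Some(64 * 1024) }
}

fn main() {
    let mut a = MockAdapter { inbox: Default::default(), sent: Vec::new() };
    a.inbox.push_back((PeerHandle(1), b"shard".to_vec()));
    let (peer, bytes) = a.recv().unwrap();
    a.send(&peer, &bytes).unwrap(); // reply over the same opaque handle
    println!("echoed {} bytes to {:?}", bytes.len(), peer);
}
```

Keeping the trait byte-oriented means no shard or object semantics leak into transports, which is what makes the compatibility wrappers straightforward.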
- Add a transport-driven ingest loop that reads from one or more adapters and routes bytes into shard processing.
- Keep current shard pipeline unchanged: dedupe/cache -> subscription gate -> forward -> reconstruct -> decrypt -> app callback.
- Accept inbound-only or outbound-only adapters.
- Exit criteria: node receives from adapter and delivers decrypted payload in integration tests.
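A minimal sketch of the ingest loop, assuming adapters feed an `mpsc` channel (the channel stands in for one or more adapter `recv` loops) and `process` stands in for the unchanged shard pipeline entry point. `run_ingest` is an illustrative name, not the `veil-node` API.

```rust
use std::sync::mpsc;

/// Drain byte payloads from any number of adapter feeds and route each
/// into the (unchanged) shard pipeline, represented here by `process`.
fn run_ingest<F: FnMut(u64, &[u8])>(rx: mpsc::Receiver<(u64, Vec<u8>)>, mut process: F) {
    // The loop ends when every sender (adapter feed) has hung up.
    for (peer, bytes) in rx {
        process(peer, &bytes);
    }
}

fn main() {
    let (tx, rx) = mpsc::channel::<(u64, Vec<u8>)>();
    // Two "adapters" feeding the same loop; inbound-only adapters are fine.
    tx.send((1, vec![0u8; 16])).unwrap();
    tx.send((2, vec![0u8; 32])).unwrap();
    drop(tx);

    let mut seen = Vec::new();
    run_ingest(rx, |peer, bytes| seen.push((peer, bytes.len())));
    assert_eq!(seen, vec![(1, 16), (2, 32)]);
    println!("ingested {} payloads", seen.len());
}
```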
- Implement lane selection as local policy only (fast lane + fallback lane), with no header/schema impact.
- Use coarse transport capabilities (payload hint) to choose shard/bucket send strategy.
- Exit criteria: sim run shows delivery success under partial lane failure.
- Finalize eviction behavior: drop expired first, then evict most common by local observations.
- Preserve local signals for future WoT/payment weighting without affecting validity rules.
- Exit criteria: under constrained cache, rare shards have longer residency than common shards.
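The eviction order above (expired first, then most-common first, so rare shards survive longest) can be expressed as a sort key. The struct fields and the local replica-count heuristic are illustrative assumptions, not the `veil-node` cache schema.

```rust
#[derive(Debug, Clone, PartialEq, Eq)]
struct CacheEntry {
    shard_id: u64,
    expired: bool,          // TTL already elapsed
    observed_replicas: u32, // local heuristic: times this shard was seen from peers
}

/// Sort so the best eviction candidates come first.
fn eviction_order(entries: &mut Vec<CacheEntry>) {
    entries.sort_by(|a, b| {
        // Expired entries are always dropped before live ones ...
        b.expired.cmp(&a.expired)
            // ... then evict the most common (highest replica count) first.
            .then(b.observed_replicas.cmp(&a.observed_replicas))
    });
}

fn main() {
    let mut cache = vec![
        CacheEntry { shard_id: 1, expired: false, observed_replicas: 9 }, // common
        CacheEntry { shard_id: 2, expired: true,  observed_replicas: 1 }, // expired
        CacheEntry { shard_id: 3, expired: false, observed_replicas: 1 }, // rare
    ];
    eviction_order(&mut cache);
    let order: Vec<u64> = cache.iter().map(|e| e.shard_id).collect();
    assert_eq!(order, vec![2, 1, 3]); // expired first, rare shard survives longest
    println!("eviction order: {:?}", order);
}
```

Because the heuristic only reorders eviction, validity rules are untouched, and a future WoT or payment weight can be folded into the same sort key.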
- Add local trust tiers: `Trusted`, `Known`, `Unknown`, `Muted`, `Blocked`.
- Add policy interface for:
  - `classify_publisher(pubkey) -> tier`
  - `forwarding_quota(tier) -> fraction`
  - `storage_budget(tier) -> max_shards`
  - `eviction_priority(meta) -> score`
- Default v1 policy:
  - explicit follows -> `Trusted`, blocks -> `Blocked`, mutes -> `Muted`
  - `Known` via >= 2 trusted endorsers, max depth 2, strong decay
  - forwarding budget 70/25/5 for Trusted/Known/Unknown (Muted ~0, Blocked 0)
- Exit criteria: policy toggles change forwarding/cache priorities without changing validation results.
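The tiers and default v1 budgets can be sketched directly from the plan. The enum mirrors the listed tiers and the 70/25/5 split; `classify` is a simplified stand-in for `classify_publisher` that takes pre-computed local signals instead of a pubkey lookup.

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Tier { Trusted, Known, Unknown, Muted, Blocked }

/// Fraction of forwarding budget granted to each tier (default v1 policy).
fn forwarding_quota(tier: Tier) -> f32 {
    match tier {
        Tier::Trusted => 0.70,
        Tier::Known   => 0.25,
        Tier::Unknown => 0.05,
        Tier::Muted   => 0.0, // "~0": best-effort only in the plan
        Tier::Blocked => 0.0,
    }
}

/// Classify from purely local signals: explicit follow/block/mute plus
/// bounded endorsements (>= 2 trusted endorsers promote to Known).
fn classify(followed: bool, blocked: bool, muted: bool, trusted_endorsers: u32) -> Tier {
    if blocked { Tier::Blocked }
    else if muted { Tier::Muted }
    else if followed { Tier::Trusted }
    else if trusted_endorsers >= 2 { Tier::Known }
    else { Tier::Unknown }
}

fn main() {
    assert_eq!(classify(true, false, false, 0), Tier::Trusted);
    assert_eq!(classify(false, false, false, 2), Tier::Known);
    assert_eq!(forwarding_quota(Tier::Trusted), 0.70);
    println!("default policy ok");
}
```

Blocks and mutes win over follows here; that precedence is an assumption the real policy would need to pin down explicitly.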
- Freeze v0.1 fields/flags and tag derivations (`feed_tag`, `rv_tag`)
- Publish deterministic vectors for tags, object headers, and shard headers
- Exit criteria: vectors pass in CI across all relevant crates
- Implement batching (`TARGET_BATCH_SIZE`) and fast interactive flush
- Implement object build: encrypt, optional sign, padding to bucket-friendly sizes
- Exit criteria: object round-trip tests plus signature/AEAD negative tests
- Implement profile/bucket selection and systematic Reed-Solomon split
- Generate `shard_id = H(shard_bytes)` and enforce dedupe semantics
- Exit criteria: property tests reconstruct from any `k` unique shard indices
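The `shard_id = H(shard_bytes)` dedupe semantics can be sketched as below. Note the big caveat: std's `DefaultHasher` stands in for the real hash `H` purely for illustration; it is not collision-resistant and must not be used for actual shard identity.

```rust
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

/// Placeholder for H(shard_bytes) -- a real implementation would use a
/// cryptographic hash, not DefaultHasher.
fn shard_id(shard_bytes: &[u8]) -> u64 {
    let mut h = std::collections::hash_map::DefaultHasher::new();
    shard_bytes.hash(&mut h);
    h.finish()
}

/// Dedupe store: inserting the same shard bytes twice is a no-op.
#[derive(Default)]
struct ShardStore { seen: HashSet<u64> }

impl ShardStore {
    /// Returns true if the shard was new (and should be cached/forwarded).
    fn insert(&mut self, shard_bytes: &[u8]) -> bool {
        self.seen.insert(shard_id(shard_bytes))
    }
}

fn main() {
    let mut store = ShardStore::default();
    assert!(store.insert(b"shard-0"));  // first sight: keep and forward
    assert!(!store.insert(b"shard-0")); // duplicate: drop silently
    assert!(store.insert(b"shard-1"));
    println!("unique shards: {}", store.seen.len());
}
```

Deriving the id from the shard bytes themselves means dedupe needs no coordination: any node computes the same id for the same shard.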
- Enforce subscription-based forwarding by `tag`
- Implement TTL cache and rarity-biased eviction using local replica heuristics
- Add WoT-aware prioritization hooks (tiered quotas/caps) behind default-compatible policy
- Exit criteria: under pressure, rare shards survive longer than common shards
- Lane A sends `k+2` shards to two peers; Lane B sends fallback shards
- Add escalation on ACK timeout with backoff and bounded retries
- Land transport-adapter runtime loop for inbound/outbound byte payloads
- Exit criteria: delivery succeeds in degraded-lane simulation scenarios
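The ACK-timeout escalation above can be sketched as bounded retries with exponential backoff. The constants and the `deliver_with_retries`/`send_and_wait` shape are assumptions for illustration, not the `veil-node` retry API.

```rust
use std::time::Duration;

const MAX_RETRIES: u32 = 4;
const BASE_BACKOFF: Duration = Duration::from_millis(500);

/// Backoff before retry `attempt` (0-based): base * 2^attempt.
fn backoff(attempt: u32) -> Duration {
    BASE_BACKOFF * 2u32.pow(attempt)
}

/// Drive send attempts until an ACK arrives or retries are exhausted.
/// `send_and_wait` abstracts "send shards, wait for ACK until timeout"
/// and returns true once an ACK is received.
fn deliver_with_retries<F: FnMut(u32) -> bool>(mut send_and_wait: F) -> Result<u32, ()> {
    for attempt in 0..=MAX_RETRIES {
        if send_and_wait(attempt) {
            return Ok(attempt); // ACK received
        }
        // On timeout: escalate (e.g. add the fallback lane) and back off.
        let _pause = backoff(attempt); // in real code: sleep(_pause)
    }
    Err(()) // bounded: give up after MAX_RETRIES + 1 attempts
}

fn main() {
    // Simulated peer that only ACKs on the third attempt.
    let result = deliver_with_retries(|attempt| attempt == 2);
    assert_eq!(result, Ok(2));
    assert_eq!(backoff(3), Duration::from_millis(4000));
    println!("delivered on attempt {:?}", result);
}
```

Escalation fits naturally at the timeout branch: attempt 0 might use Lane A only, with Lane B fallback shards added on later attempts.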
- Add fuzzing for codec/parser boundaries
- Add end-to-end example (`object -> shards -> forward -> reconstruct -> ACK`)
- Exit criteria: `cargo fmt`, `clippy -D warnings`, and `cargo test --workspace` all green
- Define local RPC schema (requests/events) with versioning.
- Choose IPC transport (localhost HTTP+WS first).
- Do not use the SDK in android-node; UI must talk directly to the node API.
- Define identity/storage boundaries (node owns keys, cache, queue).
- Define observability contract (lane health, queue depth, shard stats, errors).
- Exit criteria: RPC spec doc + stub client in app.
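One way to sketch the versioned request/event schema is as plain Rust enums wrapped in a versioned frame; in practice these would be serialized (e.g. JSON over localhost HTTP+WS). All variant names, the `Frame` wrapper, and `RPC_VERSION` are illustrative assumptions.

```rust
#[derive(Debug, PartialEq)]
enum Request {
    GetStatus,
    Publish { payload: Vec<u8> },
    SetPolicy { key: String, value: String },
}

#[derive(Debug, PartialEq)]
enum Event {
    LaneHealth { lane: String, healthy: bool },
    QueueDepth(usize),
    MessageDelivered { object_root: [u8; 32] },
    Error(String),
}

/// Every frame carries a schema version so UI and node can evolve
/// independently; unknown versions are rejected, not guessed at.
#[derive(Debug)]
struct Frame<T> { version: u16, body: T }

const RPC_VERSION: u16 = 1;

fn main() {
    let req = Frame { version: RPC_VERSION, body: Request::GetStatus };
    let ev = Frame { version: RPC_VERSION, body: Event::QueueDepth(3) };
    println!("request {:?} / event {:?}", req.body, ev.body);
}
```

Splitting requests (UI -> node) from events (node -> UI stream) maps cleanly onto the HTTP + WS transport choice above.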
- Android foreground service wrapper for the node.
- Start/stop lifecycle and persistent notification.
- Authenticated localhost RPC endpoint.
- Exit criteria: UI can connect and read node status.
- Node manages QUIC/WS/Tor lanes and exposes health.
- Node-managed identity creation/persistence/rotation.
- UI shows identity and lane status.
- Exit criteria: UI can display live lane health + identity.
- Node owns publish queue with offline buffering and retries.
- UI submits payloads and receives status updates.
- Exit criteria: UI send works offline and drains on reconnect.
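The offline-buffering behavior in this milestone can be sketched as a FIFO queue that accepts submissions regardless of connectivity and drains on reconnect. `PublishQueue` and its methods are illustrative names; real code would persist the queue and retry with backoff.

```rust
use std::collections::VecDeque;

#[derive(Default)]
struct PublishQueue {
    pending: VecDeque<Vec<u8>>,
    online: bool,
}

impl PublishQueue {
    /// UI submits a payload; it is buffered regardless of connectivity.
    fn submit(&mut self, payload: Vec<u8>) {
        self.pending.push_back(payload);
    }

    /// On reconnect, drain in FIFO order; returns how many were sent.
    /// `send` stands in for the real shard/forward path.
    fn drain<F: FnMut(&[u8])>(&mut self, mut send: F) -> usize {
        if !self.online { return 0; }
        let mut sent = 0;
        while let Some(p) = self.pending.pop_front() {
            send(&p);
            sent += 1;
        }
        sent
    }
}

fn main() {
    let mut q = PublishQueue::default();
    q.submit(b"hello".to_vec());      // offline: buffered, not lost
    assert_eq!(q.drain(|_| {}), 0);   // nothing sent while offline
    q.online = true;                  // reconnect
    assert_eq!(q.drain(|_| {}), 1);   // queue drains in order
    println!("queue empty: {}", q.pending.is_empty());
}
```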
- Node stores shards, reconstructs objects, validates signatures.
- UI receives decrypted semantic messages via event stream.
- Exit criteria: end-to-end message flow over multiple lanes.
- Node computes trust tiers and enforces routing/cache policy.
- UI displays trust summaries and policy controls.
- Exit criteria: policy changes affect routing without protocol changes.
- Crash recovery and data migrations.
- Diagnostics UI for lane health, queue, storage, errors.
- Exit criteria: reproducible E2E tests + observability dashboard.
- Endorsement payloads update local WoT policy automatically.
- Policy explanation + update endpoints for diagnostics.
- Persistent shard cache across restarts.
- Inbound QUIC listener for true P2P.
- Object/shard retrieval endpoints for clients.
- Auto discovery/contact exchange (LAN broadcast + gossip + DHT lookup).
- Functional: tag derivation, schema compliance, and ACK behavior
- Resilience: packet loss tolerance and cache churn behavior in `veil-sim`
- Performance: throughput, p95 end-to-end latency, and cache hit rate baselines
- Transport-agnostic: same shard/object pipeline passes over at least two adapter implementations (e.g., in-memory mock + second lane mock)
- Policy-locality: WoT settings only affect prioritization (forward/cache order), never object validity decisions
- Functional: tag derivation, schema compliance, and ACK behavior (codec + node tests)
- Resilience: packet loss tolerance and cache churn behavior in `veil-sim`
- Performance: record baseline report (p95 latency/throughput/cache hit rate) from `benchmark_runner` (docs/benchmarks/bench_report_2026-02-06.*)
- Transport-agnostic: enable CI job with `VEIL_E2E_NETWORK=1` for transport smoke test
- Policy-locality: WoT settings only affect prioritization, not validity
- FEC implementation variance -> lock vectors plus deterministic test corpus
- Traffic analysis leakage -> default padding profiles plus bucket normalization
- Transport coupling -> strict transport trait boundaries and adapters
- Cache churn under load -> simulation-driven eviction tuning before API freeze