
NARRA


Real-time narrative intelligence for Solana
narra.cloud · @narra_cloud


Overview

Memecoins don't move in isolation — they move in narratives. Dog coins pump together. AI tokens rotate as a sector. Political memes spike on the same catalyst. But no existing tool surfaces this. Screeners show individual tokens. NARRA shows the stories underneath them.

NARRA is a real-time narrative intelligence layer for the Solana memecoin ecosystem. It continuously ingests token data from multiple on-chain and off-chain sources, clusters tokens into thematic narratives using a hybrid classification engine, and ranks those narratives by momentum. The result is a live map of what's moving and why — at the narrative level, not the token level.

[Screenshot: NARRA Dashboard]

Architecture

NARRA runs a multi-stage pipeline on a continuous cycle:

Ingest → Enrich → Cluster → Classify → Score → Persist

Ingest — Multi-source token discovery across the Solana ecosystem. Parallel wave-based fetching with deduplication, rate limit management, and fallback handling. Sources are abstracted behind a normalized interface so new feeds can be added without touching the pipeline.
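A minimal sketch of what that normalized source interface could look like; the names (`TokenSource`, `RawToken`, `fetchAll`) are illustrative, not NARRA's actual API:

```typescript
interface RawToken {
  address: string;
  symbol: string;
  source: string;
}

// Every feed implements the same shape, so new sources plug in
// without changes to the pipeline.
interface TokenSource {
  name: string;
  fetch(): Promise<RawToken[]>;
}

// Fetch all sources in parallel; a failed source yields an empty list
// instead of failing the wave, and results are deduped by address
// (first occurrence wins).
async function fetchAll(sources: TokenSource[]): Promise<RawToken[]> {
  const waves = await Promise.all(
    sources.map((s) => s.fetch().catch(() => [] as RawToken[]))
  );
  const seen = new Map<string, RawToken>();
  for (const token of waves.flat()) {
    if (!seen.has(token.address)) seen.set(token.address, token);
  }
  return [...seen.values()];
}
```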

Enrich — Raw token addresses are resolved to full market profiles: price, market cap, FDV, volume (1h/24h), transaction counts, price deltas, liquidity depth, pair metadata, and boost signals. Tokens below configurable thresholds are filtered.
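The threshold filter might look like the following sketch; the field names and limits are assumptions, not NARRA's actual schema:

```typescript
// Subset of an enriched market profile, for illustration only.
interface TokenProfile {
  address: string;
  marketCap: number;
  volume24h: number;
  liquidity: number;
}

interface Thresholds {
  minMarketCap: number;
  minVolume24h: number;
  minLiquidity: number;
}

// A token survives enrichment only if every metric clears its floor.
function passesThresholds(t: TokenProfile, th: Thresholds): boolean {
  return (
    t.marketCap >= th.minMarketCap &&
    t.volume24h >= th.minVolume24h &&
    t.liquidity >= th.minLiquidity
  );
}
```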

Cluster — A hybrid clustering engine groups tokens into narratives. The first pass uses a synonym engine with substring matching and n-gram analysis. The second pass runs word frequency extraction to detect emergent themes that don't match any known category. Clusters are built bottom-up — new narratives form organically from token-level signals.
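The first pass can be sketched as a substring match against a synonym table; the categories and synonyms below are made-up examples:

```typescript
// Narrative -> synonym list. In NARRA this table also grows with
// AI-learned synonyms; here it is a static example.
const synonyms: Record<string, string[]> = {
  dogs: ["dog", "doge", "shib", "inu"],
  ai: ["ai", "gpt", "agent"],
};

// Return the first narrative whose synonyms appear in the token name,
// or null so the token falls through to n-gram / AI classification.
function clusterToken(name: string): string | null {
  const lower = name.toLowerCase();
  for (const [narrative, words] of Object.entries(synonyms)) {
    if (words.some((w) => lower.includes(w))) return narrative;
  }
  return null;
}
```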

Classify — Tokens that escape the deterministic clustering layer are routed to an AI classifier. The classifier can assign tokens to existing narratives or instantiate new ones, expanding the synonym engine in the process. Classification state is persistent — the system's vocabulary grows with every cycle.
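The routing logic could be sketched as below: deterministic match first, learned state second, AI call only for genuinely novel tokens. The `Classify` signature is an assumption standing in for the real AI call:

```typescript
type Classify = (name: string) => Promise<string>;

// Route one token to a narrative. The learned map is the persistent
// classification state; it grows so repeat tokens skip the AI entirely.
async function route(
  name: string,
  deterministic: (n: string) => string | null,
  aiClassify: Classify,
  learned: Map<string, string>
): Promise<string> {
  const hit = deterministic(name);
  if (hit) return hit;
  if (learned.has(name)) return learned.get(name)!; // known from prior cycles
  const narrative = await aiClassify(name);          // may create a new narrative
  learned.set(name, narrative);                      // expand persistent state
  return narrative;
}
```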

Score — Each narrative cluster is scored by a momentum function that weights volume velocity, transaction acceleration, price momentum, and external signal data. A dedicated high-conviction tier surfaces narratives meeting strict multi-factor thresholds.
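A momentum function of this shape could be a weighted sum of the four signals; the weights here are illustrative, not NARRA's tuned values:

```typescript
interface ClusterSignals {
  volumeVelocity: number;  // e.g. 1h volume relative to 24h hourly average
  txAcceleration: number;  // change in transaction rate
  priceMomentum: number;   // short-window price delta
  externalSignal: number;  // boost / external signal strength
}

// Hypothetical weighting; the real function and weights are not public.
function momentumScore(s: ClusterSignals): number {
  return (
    0.4 * s.volumeVelocity +
    0.3 * s.txAcceleration +
    0.2 * s.priceMomentum +
    0.1 * s.externalSignal
  );
}
```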

Persist — AI state (learned categories, synonym expansions, token assignments) is persisted to a KV store so the classifier's knowledge compounds across cold starts, redeployments, and scaling events. Scan results are cached with configurable TTL for instant reads.
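State round-tripping can be sketched behind a minimal KV interface; the real app uses Vercel KV, which is swapped here for a generic store, and the key and state shape are assumptions:

```typescript
interface KV {
  get(key: string): Promise<string | null>;
  set(key: string, value: string): Promise<void>;
}

// Illustrative shape of the persisted classifier state.
interface AIState {
  categories: Record<string, string[]>; // narrative -> learned synonyms
  assignments: Record<string, string>;  // token address -> narrative
}

const STATE_KEY = "narra:ai-state"; // hypothetical key

// Load accumulated state, or start empty on a true cold start.
async function loadState(kv: KV): Promise<AIState> {
  const raw = await kv.get(STATE_KEY);
  return raw ? (JSON.parse(raw) as AIState) : { categories: {}, assignments: {} };
}

async function saveState(kv: KV, state: AIState): Promise<void> {
  await kv.set(STATE_KEY, JSON.stringify(state));
}
```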

Technical Design

Stateful AI Classification

The classifier is not stateless. It maintains a persistent knowledge graph of narrative categories, their associated synonyms, display metadata, and token assignments. Each classification cycle:

  1. Loads accumulated state from persistent storage
  2. Merges learned synonyms into the deterministic matching layer
  3. Runs classification only on genuinely novel tokens
  4. Persists expanded state back to storage

This means the system gets more accurate over time. Categories that the AI creates in cycle N become part of the deterministic engine in cycle N+1 — reducing API calls and improving latency with every iteration.
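Step 2 of the cycle, merging learned synonyms into the deterministic layer, can be sketched as a deduplicating union; the function name is illustrative:

```typescript
// Merge AI-learned synonyms into the base table so categories created
// in cycle N match deterministically in cycle N+1.
function mergeSynonyms(
  base: Record<string, string[]>,
  learned: Record<string, string[]>
): Record<string, string[]> {
  const merged: Record<string, string[]> = { ...base };
  for (const [category, words] of Object.entries(learned)) {
    merged[category] = [...new Set([...(merged[category] ?? []), ...words])];
  }
  return merged;
}
```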

Pipeline Resilience

The scan pipeline is designed for graceful degradation:

  • Source failures are isolated — if one feed is down, others continue
  • Enrichment failures fall back to raw token metadata where available
  • Classification failures don't block the response — cached AI state is applied
  • The full pipeline falls back to the most recent cached result on any unrecoverable error
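The outermost fallback can be sketched as a wrapper that serves the most recent cached result when the pipeline throws; names here are hypothetical:

```typescript
// Run the pipeline; on any unrecoverable error, fall back to the most
// recent cached result rather than failing the request.
async function scanWithFallback<T>(
  runPipeline: () => Promise<T>,
  readCache: () => Promise<T | null>
): Promise<T> {
  try {
    return await runPipeline();
  } catch {
    const cached = await readCache();
    if (cached !== null) return cached;
    throw new Error("pipeline failed and no cached result available");
  }
}
```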

HOT Detection

A separate scoring pass identifies tokens meeting strict momentum criteria across multiple signals simultaneously. These are surfaced in a dedicated tier, tagged with their home narrative for context. The thresholds are tuned to minimize noise — this isn't a volume sort, it's a multi-factor conviction filter.
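A multi-factor conviction filter of this kind requires every signal to clear its threshold at once, as in the sketch below; the thresholds are made-up examples:

```typescript
interface HotSignals {
  volumeVelocity: number;
  txAcceleration: number;
  priceMomentum: number;
}

// Illustrative thresholds; the tuned production values are not public.
const HOT_THRESHOLDS: HotSignals = {
  volumeVelocity: 2.0,
  txAcceleration: 1.5,
  priceMomentum: 0.1,
};

// HOT only if all signals clear their floors simultaneously --
// an AND across factors, not a sort on any single one.
function isHot(s: HotSignals): boolean {
  return (Object.keys(HOT_THRESHOLDS) as (keyof HotSignals)[]).every(
    (k) => s[k] >= HOT_THRESHOLDS[k]
  );
}
```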

Stack

Layer      Technology
Framework  Next.js 14 (App Router)
Runtime    Vercel Serverless Functions
Storage    Vercel KV (Redis-backed)
AI         Claude API
Language   TypeScript

Project Structure

app/
├── api/
│   ├── scan/          # Main pipeline orchestrator
│   ├── classify/      # AI classification endpoint
│   ├── cron/          # Background refresh scheduling
│   └── ...
├── page.tsx           # Dashboard UI
└── ...
lib/
├── ai-classifier.ts   # Stateful classification engine
└── ...

Setup

git clone https://github.com/NARRA-SOL/v0-narra.git
cd v0-narra
npm install
cp .env.example .env.local
npm run dev

The first scan takes ~30s as the pipeline warms up. Subsequent loads are served from cache.

Environment

ANTHROPIC_API_KEY=         # AI classification
KV_REST_API_URL=           # Persistent state store
KV_REST_API_TOKEN=         # KV authentication
CRON_SECRET=               # Background job auth

Deployment

Designed for Vercel:

  1. Push to GitHub
  2. Import in Vercel Dashboard
  3. Configure environment variables
  4. Create KV Database (Storage → KV → link to project)
  5. Deploy

Background cron handles refresh cycles. All reads are served from persistent cache.

API

GET /api/scan

Returns the current narrative map: clustered tokens, momentum scores, top movers, and classifier state.

Param     Description
force=1   Bypass the cache and trigger a fresh pipeline run

Response includes cluster-level aggregates (volume, transactions, price momentum, FDV), per-token market data, AI classification metadata, and pipeline diagnostics.
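A hypothetical client call; only the endpoint and the `force=1` parameter are documented above, the helper names are illustrative:

```typescript
// Build the scan URL, optionally bypassing the cache.
function scanUrl(base: string, force = false): string {
  return `${base}/api/scan${force ? "?force=1" : ""}`;
}

// Fetch the current narrative map from a deployment at `base`.
async function fetchNarratives(base: string, force = false) {
  const res = await fetch(scanUrl(base, force));
  if (!res.ok) throw new Error(`scan failed: ${res.status}`);
  return res.json();
}
```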


built by @co_numina
