
Releases: romizone/simulasiLLM

🚀 v1.0.0 — Initial Public Release

14 Feb 16:02


🧠 Transformer Explainer v1.0.0

The first public release of the interactive LLM Attention Simulation.

✨ Features

  • 🔀 Token Embedding Visualization — Unicode-aware tokenization with positional encoding
  • 🔑 Q/K/V Projection Inspector — examine Query, Key, and Value vectors per attention head
  • 🗺️ Masked Self-Attention Heatmap — interactive attention matrix with causal masking
  • 📊 Next-Token Probability Distribution — real-time softmax output with candidate tokens
  • ⚡ Autoregressive Text Generation — step-by-step token generation with attention updates
  • 🎛️ Adjustable Sampling — temperature (0.3–1.5), top-k (1–12), and generation-length controls
  • 👁️ Multi-Head Attention View — switch between heads to compare learned patterns

πŸ—οΈ Simulated Architecture

Param Value
Model GPT-2 Small
Layers 12
Attention Heads 4
Hidden Size 16
Causal Mask Active
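The attention heatmap with an active causal mask boils down to this computation: each row of scaled dot-product scores has its future positions set to -Infinity before the softmax, so a token can only attend to itself and earlier tokens. A minimal sketch, assuming the simulated per-head dimension of 4 (hidden size 16 split across 4 heads); names are illustrative, not the app's API.

```javascript
const D_K = 4; // per-head dimension: hidden size 16 / 4 heads

// Q and K are arrays of per-position vectors (length D_K each).
// Returns the row-stochastic matrix rendered as the attention heatmap.
function attentionWeights(Q, K) {
  const n = Q.length;
  const weights = [];
  for (let i = 0; i < n; i++) {
    // Scaled dot-product scores; causal mask blocks future positions.
    const scores = K.map((k, j) => {
      if (j > i) return -Infinity;
      const dot = k.reduce((s, v, d) => s + v * Q[i][d], 0);
      return dot / Math.sqrt(D_K);
    });
    // Softmax over each row.
    const m = Math.max(...scores);
    const exps = scores.map((s) => Math.exp(s - m));
    const sum = exps.reduce((a, b) => a + b, 0);
    weights.push(exps.map((e) => e / sum));
  }
  return weights;
}
```

The resulting matrix is lower-triangular with rows summing to 1, which is exactly the shape the heatmap visualizes.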

πŸ› οΈ Tech

  • Pure HTML + CSS + Vanilla JavaScript
  • Zero dependencies, no build step
  • Deployed on Vercel

🔗 Links