Releases: romizone/simulasiLLM
v1.0.0 – Initial Public Release
Transformer Explainer v1.0.0
The first public release of the interactive LLM Attention Simulation.
Features
- Token Embedding Visualization – Unicode-aware tokenization with positional encoding
- Q/K/V Projection Inspector – examine Query, Key, and Value vectors per attention head
- Masked Self-Attention Heatmap – interactive attention matrix with causal masking
- Next-Token Probability Distribution – real-time softmax output with candidate tokens
- Autoregressive Text Generation – step-by-step token generation with live attention updates
- Adjustable Sampling – temperature (0.3–1.5), top-k (1–12), and generation-length controls
- Multi-Head Attention View – switch between heads to compare learned patterns
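The causal masking, softmax, and temperature/top-k sampling steps listed above can be sketched in vanilla JavaScript. All function names here are illustrative assumptions, not the app's actual code:

```javascript
// Numerically stable softmax over an array of logits.
function softmax(logits) {
  const max = Math.max(...logits);
  const exps = logits.map(x => Math.exp(x - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map(e => e / sum);
}

// Scaled dot-product attention weights for one head, with a causal mask:
// position i may only attend to positions j <= i.
function causalAttention(Q, K) {
  const dk = Q[0].length;
  return Q.map((q, i) =>
    softmax(K.map((k, j) =>
      j <= i
        ? q.reduce((s, v, d) => s + v * k[d], 0) / Math.sqrt(dk)
        : -Infinity // masked: future tokens get weight 0 after softmax
    ))
  );
}

// Temperature + top-k sampling over next-token logits.
// Note: ties at the k-th logit are all kept, a common simplification.
function sampleTopK(logits, temperature, k) {
  const scaled = logits.map(x => x / temperature);
  const cutoff = [...scaled].sort((a, b) => b - a)[Math.min(k, scaled.length) - 1];
  const probs = softmax(scaled.map(x => (x >= cutoff ? x : -Infinity)));
  let r = Math.random(), acc = 0;
  for (let i = 0; i < probs.length; i++) {
    acc += probs[i];
    if (r < acc) return i; // inverse-CDF sampling
  }
  return probs.length - 1;
}
```

Masked positions receive `-Infinity` before the softmax, so each row of the heatmap is a valid probability distribution over past tokens only; lowering the temperature sharpens the distribution, and `k = 1` reduces to greedy decoding.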
Simulated Architecture
| Parameter | Value |
|---|---|
| Model | GPT-2 Small (simplified) |
| Layers | 12 |
| Attention Heads | 4 |
| Hidden Size | 16 |
| Causal Mask | Active |
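With 4 attention heads sharing a hidden size of 16, each head operates in a 4-dimensional subspace. A hypothetical config sketch (names are assumptions, not the app's code):

```javascript
// Illustrative config mirroring the simulated architecture table above.
const config = {
  layers: 12,
  heads: 4,
  hiddenSize: 16,
};

// Per-head dimension: the hidden vector is split evenly across heads,
// as in standard multi-head attention.
const headDim = config.hiddenSize / config.heads;
```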
Tech
- Pure HTML + CSS + Vanilla JavaScript
- Zero dependencies, no build step
- Deployed on Vercel
Links
- Live Demo: simulasillm.vercel.app
- Research Paper: paper-llm-attention.vercel.app