Record: 11L XSA+EMA+TTT, sliding val_bpb=1.1254 (3-seed mean 1.1256)#338
Open
alertcat wants to merge 9 commits intoopenai:mainfrom
Open
Record: 11L XSA+EMA+TTT, sliding val_bpb=1.1254 (3-seed mean 1.1256)#338alertcat wants to merge 9 commits intoopenai:mainfrom
alertcat wants to merge 9 commits intoopenai:mainfrom
Conversation
Innovation over PR openai#198 (SOTA 1.1318): - 12 transformer layers (was 11): +2.2M params, better representation - Int5 quantization for MLP weights [-16,15]: 3 zero high bits - zstd compression 1.88x vs int6 1.51x, saves ~1.8MB - Funds the 12th layer within 16MB budget - Int6 kept for attention weights (precision-sensitive) - FA3 fallback for older PyTorch - LR=0.025 (validated as optimal in A/B testing) Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
RyanLisse
added a commit
to RyanLisse/parameter-golf
that referenced
this pull request
Mar 21, 2026
New CUDA presets: - pr332_12l_xsa: 12L/2xMLP, seq2048, momentum 0.99 (from PR openai#332) - pr338_11l_ttt: 11L/2xMLP, seq2048, momentum 0.99 (from PR openai#338) - bft_ensemble: 9L/3xMLP Byzantine fault tolerant checkpoint config - difficulty_adjusted: 10L/2xMLP adaptive search with tight LR - partial_rope_headtemp: baseline arch with novel attention params Expanded search: NUM_LAYERS includes 11, TRAIN_SEQ_LEN includes 4096. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
DigitalSword99
pushed a commit
to DigitalSword99/parameter-golf
that referenced
this pull request
Mar 21, 2026
- Move EMA shadow weights to GPU (CPU transfers cost ~32% throughput) - Increase train seq_len from 1024 to 2048 (matches record PR openai#338) Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
translatingthename
added a commit
to translatingthename/parameter-golf
that referenced
this pull request
Mar 22, 2026
3-seed mean: 1.1371 (seeds 42, 7, 2024) Dynamic evaluation (Krause et al., ICML 2018) applied during sliding window scoring. 2.0% consistent bpb improvement at zero artifact cost. Built on PR openai#315 (jfprincz) and PR openai#338 (alertcat).
yahya010
pushed a commit
to yahya010/parameter-golf
that referenced
this pull request
Mar 22, 2026
v21: 11L + no-QAT + SWA + TTT + SmearGate + OrthoInit (1.1393 BPB) v24: PR openai#338 SOTA stack (partial RoPE, LN scale, late QAT, XSA4, EMA) run_modal.py: Modal cloud runner for 8xH100 Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
ThomAub
pushed a commit
to ThomAub/parameter-golf
that referenced
this pull request
Mar 22, 2026
Many TTT submissions (openai#136, openai#152, openai#254, openai#264, openai#338, openai#398, openai#417, openai#421, openai#442) flagged as potentially invalid for adapting on eval tokens BEFORE scoring them. Added correct score-then-adapt protocol with implementation guide. https://claude.ai/code/session_01M5XTtyz2Zdq5BDeh9qNn9y
sahiee-dev
added a commit
to sahiee-dev/parameter-golf
that referenced
this pull request
Mar 23, 2026
New addition: EMA (decay=0.9999) shadow model, eval uses EMA weights. EMA coexists with SWA. Zero artifact cost. Consistent with PR openai#338 (best open PR, 1.1254 bpb) which also uses EMA. 11th layer ruled out: needs ~0.91MB, only ~0.36MB budget available. Full stack on thwu1 base (1.1428): - TrigramHash(20480, dim=32): trigram embeddings, bigram 10240->4096 - XSA: orthogonal self-value removal, last 4 layers (PR openai#287) - EMA: decay=0.9999, shadow model used at final eval - TTT: 3-epoch SGD on val tokens, all ranks, ~47s budget Artifact: ~15.64MB. H100 validation pending.
sahiee-dev
added a commit
to sahiee-dev/parameter-golf
that referenced
this pull request
Mar 23, 2026
T4 ablation (1000 steps, 4 variants): V2 bigram=10240 no trigram: 5.4379 loss WINNER V4 bigram=8192 + trigram=8192: 5.6956 loss V3 bigram=4096 + trigram=20480: 5.7924 loss (was our submission) V1 bigram=4096 no trigram: 5.8414 loss TrigramHash adds noise, bigram reduction actively hurts. Restored bigram=10240. Stack is now: XSA + EMA + TTT on thwu1 base. These are proven techniques (XSA from PR openai#287, EMA+TTT from PR openai#338 lineage) applied cleanly on the openai#1 submission.
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
Sign up for free
to join this conversation on GitHub.
Already have an account?
Sign in to comment
Add this suggestion to a batch that can be applied as a single commit.This suggestion is invalid because no changes were made to the code.Suggestions cannot be applied while the pull request is closed.Suggestions cannot be applied while viewing a subset of changes.Only one suggestion per line can be applied in a batch.Add this suggestion to a batch that can be applied as a single commit.Applying suggestions on deleted lines is not supported.You must change the existing code in this line in order to create a valid suggestion.Outdated suggestions cannot be applied.This suggestion has been applied or marked resolved.Suggestions cannot be applied from pending reviews.Suggestions cannot be applied on multi-line comments.Suggestions cannot be applied while the pull request is queued to merge.Suggestion cannot be applied right now. Please check back later.
11L XSA + EMA + TTT + Int6 MLP3x
val_bpb = 1.1254 (sliding window stride=64, best seed 42) | 15.55 MB artifact | 8xH100 SXM, 600s
Key Innovation: TTT on XSA+EMA baseline
First submission combining XSA (Exclusive Self Attention) + EMA + Test-Time Training. After training and quantization, TTT performs 3 epochs of SGD fine-tuning on the validation token stream, adapting the model to the test distribution.
Results (3-seed, 8xH100 SXM)
Mean: 1.1256 | Std: 0.0002
TTT Details
Architecture (from PR #315)
Eval Timing
Training: 600s | TTT: 47s | Sliding eval: 73s | Total eval: ~120s
Reproduction
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: zstandard in e:�naconda\lib\site-packages (0.23.0)
Built on PR #315 (XSA, EMA, SmearGate, BigramHash, OrthoInit, sliding window eval).