The purpose of this repository is to collaboratively determine the optimal way to train small-scale language models. We begin with Andrej Karpathy's PyTorch GPT-2 trainer from llm.c, which attains 3.28 validation cross-entropy loss on the FineWeb dataset after training for 45 minutes on 8 NVIDIA H100 GPUs. We then iteratively improve the trainer in order to attain the same level of performance in less wallclock time. The current iteration reaches the same performance as Karpathy's original GPT-2 trainer in:
- 3.4 minutes on 8xH100 (original trainer needed 45)
- 0.73B tokens (original trainer needed 10B)
This improvement in training performance was brought about by the following techniques:
- Modernized architecture: Rotary embeddings, QK-Norm, and ReLU^2
- Muon optimizer [writeup] [code]
- Untied head from embedding
- Projection and classification layers initialized to zero (muP-like)
- Logit softcapping (following Gemma 2; see the sketch after this list)
- Skip connections from the embedding to residual stream junctions
- Extra embeddings which are mixed into the values in attention layers (inspired by Zhou et al. 2024)
- FlexAttention with window size warmup
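As one concrete example from the list above, logit softcapping is a smooth tanh squash on the output logits. Below is a minimal sketch (the trainer's in-model placement and details may differ), using the cap of 15 from the current record:

```python
import torch

def softcap(logits: torch.Tensor, cap: float = 15.0) -> torch.Tensor:
    # Smoothly bound logits to (-cap, cap), as in Gemma 2; record #18 below lowered the cap from 30 to 15.
    return cap * torch.tanh(logits / cap)
```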
Contributors list (growing with each new record): @Grad62304977, @jxbz, @bozavlado, @brendanh0gan, @KoszarskyB, @fernbear.bsky.social, @leloykun, @YouJiacheng, & @kellerjordan0
To run the current record, run the following commands.
```bash
git clone https://github.com/KellerJordan/modded-nanogpt.git && cd modded-nanogpt
pip install -r requirements.txt
pip install --pre torch==2.6.0.dev20241231+cu126 --index-url https://download.pytorch.org/whl/nightly/cu126 --upgrade # install torch 2.6.0
python data/cached_fineweb10B.py 8 # downloads only the first 0.8B training tokens to save time
./run.sh
```
The result will be a transformer with 124M active parameters trained for 1390 steps on 0.73B tokens of FineWeb [1], achieving ~3.279 mean validation loss (with 0.002 inter-run stddev). For comparison, the default llm.c PyTorch trainer yields >3.28 validation loss after training for 19560 steps on 10B tokens.
Note: torch.compile will take a long time on the first run.
If the CUDA or NCCL versions aren't compatible with your current system setup, Docker can be a helpful alternative: it pins the CUDA, NCCL, cuDNN, and Python versions, reducing dependency issues and simplifying setup. Note that an NVIDIA driver must already be installed on the system; this route is handy when only the driver and Docker are available.
```bash
sudo docker build -t modded-nanogpt .
sudo docker run -it --rm --gpus all -v $(pwd):/modded-nanogpt modded-nanogpt python data/cached_fineweb10B.py 10
sudo docker run -it --rm --gpus all -v $(pwd):/modded-nanogpt modded-nanogpt sh run.sh
```
The following is the progression of world records for the task of training a neural network to 3.28 validation loss on FineWeb in the minimal amount of time on an 8xH100 machine.
# | Record time | Description | Date | Log | Contributors |
---|---|---|---|---|---|
1 | 45 minutes | llm.c baseline | 05/28/24 | log | @karpathy, llm.c contributors |
2 | 31.4 minutes | Architectural modernizations & tuned learning rate | 06/06/24 | log | @kellerjordan0 |
3 | 24.9 minutes | Introduced the Muon optimizer | 10/04/24 | none | @kellerjordan0, @jxbz |
4 | 22.3 minutes | Muon improvements | 10/11/24 | log | @kellerjordan0, @bozavlado |
5 | 15.2 minutes | Pad embeddings & architectural improvements | 10/14/24 | log | @Grad62304977, @kellerjordan0 |
6 | 13.1 minutes | Distributed the overhead of Muon | 10/18/24 | log | @kellerjordan0 |
7 | 12.0 minutes | Upgraded PyTorch from 2.4.1 to 2.5.0 | 10/18/24 | log | @kellerjordan0 |
8 | 10.8 minutes | Untied embed and lm_head | 11/03/24 | log | @Grad62304977, @kellerjordan0 |
9 | 8.2 minutes | Shortcuts & tweaks | 11/06/24 | log | @Grad62304977, @kellerjordan0 |
10 | 7.8 minutes | Bfloat16 activations | 11/08/24 | log | @kellerjordan0 |
11 | 7.2 minutes | U-net & 2x lr | 11/10/24 | log | @brendanh0gan |
12 | 5.03 minutes | FlexAttention | 11/19/24 | log | @KoszarskyB |
13 | 4.66 minutes | Attention window warmup | 11/24/24 | log | @fernbear.bsky.social |
14 | 4.41 minutes | Value Embeddings | 12/04/24 | log | @KoszarskyB |
15 | 3.95 minutes | U-net pattern for value embeds, assorted code improvements | 12/08/24 | log | @leloykun, @YouJiacheng |
16 | 3.80 minutes | Various code optimizations | 12/10/24 | log | @YouJiacheng |
17 | 3.57 minutes | Sparsify value embeddings, improve rotary, drop an attn layer | 12/17/24 | log | @YouJiacheng |
18 | 3.4 minutes | Lower logit softcap from 30 to 15 | 01/04/25 | log | @KoszarskyB |
New record submissions must:
- Not modify the train or validation data pipelines. (You can change the batch size, sequence length, attention structure etc.; just don't change the underlying streams of tokens.)
- Attain ≤ 3.28 mean val loss. (Due to inter-run variance, submissions must provide enough run logs to establish, at significance level p < 0.01, that their mean val loss is ≤ 3.28. Example code to compute the p-value can be found here; a rough sketch also follows below.)
Other than that, anything and everything is fair game!
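For illustration only (the linked script is the reference): one reasonable way to check the significance requirement is a one-sided t-test of the run-log val losses against the 3.28 target. The function name and the example losses below are made up.

```python
import numpy as np
from scipy import stats

def significantly_below_target(losses, target=3.28, alpha=0.01):
    # One-sided t-test: is the mean val loss significantly below the target?
    losses = np.asarray(losses, dtype=float)
    t_stat, p_two_sided = stats.ttest_1samp(losses, popmean=target)
    p_one_sided = p_two_sided / 2 if t_stat < 0 else 1 - p_two_sided / 2
    return p_one_sided, p_one_sided < alpha

# Example with made-up final val losses from independent runs:
print(significantly_below_target([3.275, 3.277, 3.274, 3.278, 3.276]))
```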
A: The officially stated goal of NanoGPT speedrunning is as follows: gotta go fast. But for something a little more verbose involving an argument for good benchmarking, here's some kind of manifesto, adorned with a blessing from the master: https://x.com/karpathy/status/1846790537262571739
A: Because it is a competitive benchmark. In particular, if you attain a new speed record (using whatever method you want), there is an open invitation for you to post that record (on arXiv or X) and thereby vacuum up all the clout for yourself. I will even help you do it by reposting you as much as I can.
Q: NanoGPT speedrunning is cool and all, but meh it probably won't scale and is just overfitting to val loss
A: This is hard to refute, since "at scale" is an infinite category (what if the methods stop working only for >100T models?), making it impossible to fully prove. Also, I would agree that some of the methods used in the speedrun are unlikely to scale, particularly those which impose additional structure on the network, such as logit softcapping. But if the reader cares about 1.5B models, they might be convinced by this result:
Straightforwardly scaling up the speedrun (10/18/24 version) to 1.5B parameters yields a model with GPT-2 (1.5B)-level HellaSwag performance 2.5x more cheaply than @karpathy's baseline ($233 instead of $576).
Muon is defined as follows:
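In sketch form, for a single 2D weight tensor (the repo's distributed implementation differs in details, and the learning rate, momentum, and Nesterov settings below are illustrative):

```python
import torch

@torch.no_grad()
def muon_update(weight, grad, momentum_buf, lr=0.02, beta=0.95, nesterov=True):
    # Accumulate SGD-style momentum, then orthogonalize the resulting update.
    momentum_buf.mul_(beta).add_(grad)
    update = grad.add(momentum_buf, alpha=beta) if nesterov else momentum_buf
    update = zeroth_power_via_newtonschulz5(update)  # NewtonSchulz5, defined below
    weight.add_(update, alpha=-lr)
```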
Here NewtonSchulz5 is the following Newton-Schulz iteration [2, 3], which approximately replaces G with U @ V.T, where U, S, V = G.svd():
```python
import torch

@torch.compile
def zeroth_power_via_newtonschulz5(G, steps=5, eps=1e-7):
    assert len(G.shape) == 2
    a, b, c = (3.4445, -4.7750, 2.0315)  # non-convergent quintic coefficients (see below)
    X = G.bfloat16() / (G.norm() + eps)  # normalize so the spectral norm is at most 1
    if G.size(0) > G.size(1):
        X = X.T  # work in the wide orientation so X @ X.T is the smaller Gram matrix
    for _ in range(steps):
        A = X @ X.T
        B = b * A + c * A @ A
        X = a * X + B @ X  # X <- a*X + b*(X X^T)X + c*(X X^T)^2 X
    if G.size(0) > G.size(1):
        X = X.T
    return X.to(G.dtype)
```
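A quick sanity check (illustrative; assumes a recent PyTorch with bfloat16 matmul support): the singular values of the output land near 1 rather than exactly at 1, i.e. the iteration approximately maps G onto U @ V.T.

```python
G = torch.randn(1024, 4096)
X = zeroth_power_via_newtonschulz5(G)
s = torch.linalg.svdvals(X.float())
print(s.min().item(), s.max().item())  # roughly within [0.68, 1.13]
```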
For this training scenario, Muon has the following favorable properties:
- Lower memory usage than Adam (see the back-of-envelope sketch after this list)
- ~1.5x better sample-efficiency
- <2% wallclock overhead
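A back-of-envelope illustration of the memory point (assumptions: fp32 optimizer state, counting optimizer state only, and all 124M parameters handled by one optimizer; in practice the repo still uses an Adam-style optimizer for some non-matrix parameters):

```python
n_params = 124e6                              # parameters handled by the optimizer (illustrative)
bytes_per_value = 4                           # fp32 state
adam_state = 2 * n_params * bytes_per_value   # exp_avg + exp_avg_sq
muon_state = 1 * n_params * bytes_per_value   # a single momentum buffer
print(f"Adam: {adam_state / 1e9:.2f} GB, Muon: {muon_state / 1e9:.2f} GB")
```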
Many of the design choices behind this optimizer were obtained experimentally through our pursuit of CIFAR-10 speedrunning. In particular, we experimentally arrived at the following practices:
- Using Nesterov momentum inside the update, with orthogonalization applied after momentum.
- Using a specifically quintic Newton-Schulz iteration as the method of orthogonalization.
- Using non-convergent coefficients for the quintic polynomial in order to maximize slope at zero, and thereby minimize the number of necessary Newton-Schulz iterations. It turns out that precise convergence to 1 doesn't actually matter much, so we end up with a quintic that rapidly converges to the range [0.68, 1.13] upon repeated application, rather than converging more slowly to exactly 1 (see the scalar demo after this list).
- Running the Newton-Schulz iteration in bfloat16 (whereas Shampoo implementations often depend on inverse-pth-roots run in fp32 or fp64).
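A scalar demo of the non-convergent coefficients (illustrative): repeatedly applying the quintic f(x) = a*x + b*x^3 + c*x^5 pulls a small singular value into roughly [0.68, 1.13] within a handful of steps, instead of slowly approaching 1.

```python
a, b, c = (3.4445, -4.7750, 2.0315)

def f(x):
    return a * x + b * x**3 + c * x**5

x = 0.02  # a small normalized singular value
for step in range(10):
    x = f(x)
    print(step, round(x, 3))
```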
Our use of a Newton-Schulz iteration for orthogonalization traces to Bernstein & Newhouse (2024) [4], who suggested it as a way to compute Shampoo [5, 6] preconditioners, and theoretically explored Shampoo without preconditioner accumulation. In particular, Jeremy Bernstein @jxbz sent us the draft, which caused us to experiment with various Newton-Schulz iterations as the orthogonalization method for this optimizer. If we had used SVD instead of a Newton-Schulz iteration, this optimizer would have been too slow to be useful. Bernstein & Newhouse also pointed out that Shampoo without preconditioner accumulation is equivalent to steepest descent in the spectral norm, and therefore Shampoo can be thought of as a way to smooth out spectral steepest descent. The proposed optimizer can be thought of as a second way of smoothing spectral steepest descent, with a different set of memory and runtime tradeoffs compared to Shampoo.
- To run experiments on fewer GPUs, simply modify `run.sh` to use a different `--nproc_per_node`. This should not change the behavior of the training.
- If you're running out of memory, you may need to reduce the sequence length for FlexAttention (which does change the training; see here for a guide, and the mask sketch below).
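For reference, here is a minimal FlexAttention sliding-window causal mask, showing where the window size and the sequence length enter (a sketch only; the helper name and sizes are illustrative, not the repo's settings):

```python
from torch.nn.attention.flex_attention import create_block_mask

def sliding_window_causal(window: int):
    # Query position q may attend to key position kv only if kv <= q and q - kv <= window.
    def mask_mod(b, h, q_idx, kv_idx):
        return (q_idx >= kv_idx) & (q_idx - kv_idx <= window)
    return mask_mod

seq_len, window = 8192, 1024  # illustrative sizes
block_mask = create_block_mask(sliding_window_causal(window), B=None, H=None,
                               Q_LEN=seq_len, KV_LEN=seq_len)
```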
1. Guilherme Penedo et al. "The FineWeb datasets: Decanting the web for the finest text data at scale." arXiv preprint arXiv:2406.17557 (2024).
2. Nicholas J. Higham. Functions of Matrices. Society for Industrial and Applied Mathematics (2008). Equation 5.22.
3. Günther Schulz. Iterative Berechnung der reziproken Matrix. Z. Angew. Math. Mech., 13:57–59 (1933).
4. Jeremy Bernstein and Laker Newhouse. "Old Optimizer, New Norm: An Anthology." arXiv preprint arXiv:2409.20325 (2024).
5. Vineet Gupta, Tomer Koren, and Yoram Singer. "Shampoo: Preconditioned stochastic tensor optimization." International Conference on Machine Learning. PMLR, 2018.
6. Rohan Anil et al. "Scalable second order optimization for deep learning." arXiv preprint arXiv:2002.09018 (2020).
7. Alexander Hägele et al. "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations." arXiv preprint arXiv:2405.18392 (2024).
8. Zhanchao Zhou et al. "Value Residual Learning For Alleviating Attention Concentration In Transformers." arXiv preprint arXiv:2410.17897 (2024).
9. Gemma Team et al. "Gemma 2: Improving open language models at a practical size." arXiv preprint arXiv:2408.00118 (2024).
10. Alec Radford et al. "Language models are unsupervised multitask learners." OpenAI blog 1.8 (2019).
```bibtex
@misc{modded_nanogpt_2024,
  author = {Keller Jordan and Jeremy Bernstein and Brendan Rappazzo and
            @fernbear.bsky.social and Boza Vlado and You Jiacheng and
            Franz Cesista and Braden Koszarsky and @Grad62304977},
  title  = {modded-nanogpt: Speedrunning the NanoGPT baseline},
  year   = {2024},
  url    = {https://github.com/KellerJordan/modded-nanogpt}
}
```