Abstract
We investigate neuron universality in independently trained GPT-2 Small models, examining how universal neurons emerge and evolve throughout training. By analyzing five GPT-2 Small models at three checkpoints (100k, 200k, and 300k steps), we identify universal neurons through pairwise correlation analysis of activations over a dataset of 5 million tokens. We find that 1-5% of neurons exceed the universality threshold when compared against random baselines. Ablation experiments reveal that universal neurons have significant functional impacts on model predictions. Layer-wise ablation shows that removing universal neurons in the first layer causes a disproportionately large increase in both KL divergence and loss, suggesting that early-layer universal neurons play a particularly critical role in shaping final predictions.
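As a rough illustration of the pairwise correlation criterion described above (not the paper's exact pipeline), the sketch below computes, for each neuron in one model, its highest Pearson correlation with any neuron in a second model over a shared token stream; the function name, array shapes, and the 0.5 cutoff are hypothetical choices for this example, not values taken from the study.

```python
import numpy as np

def max_pairwise_correlation(acts_a: np.ndarray, acts_b: np.ndarray) -> np.ndarray:
    """For each neuron in model A, return its highest Pearson correlation
    with any neuron in model B, computed over the same tokens.

    acts_a: (n_tokens, n_neurons_a) activations from model A
    acts_b: (n_tokens, n_neurons_b) activations from model B
    """
    # Standardize each neuron's activations (zero mean, unit variance).
    a = (acts_a - acts_a.mean(axis=0)) / (acts_a.std(axis=0) + 1e-8)
    b = (acts_b - acts_b.mean(axis=0)) / (acts_b.std(axis=0) + 1e-8)
    # Pearson correlation between every cross-model neuron pair.
    corr = (a.T @ b) / acts_a.shape[0]  # (n_neurons_a, n_neurons_b)
    # Best match for each neuron in model A.
    return corr.max(axis=1)

# Hypothetical usage: flag neurons whose best cross-model correlation
# exceeds an illustrative threshold of 0.5.
# universal_mask = max_pairwise_correlation(acts_model1, acts_model2) > 0.5
```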