Accepted to Mech Interp @ NeurIPS 2025

Universal Neurons in GPT-2: Emergence, Persistence, and Functional Impact

Advey Nandan, Cheng-Ting Chou, Amrit Kurakula

Abstract

We investigate neuron universality in independently trained GPT-2 Small models, examining how universal neurons emerge and evolve throughout training. By analyzing five GPT-2 models at three checkpoints (100k, 200k, and 300k steps), we identify universal neurons through pairwise correlation analysis of activations over a dataset of 5 million tokens. We find that 1-5% of neurons exceed a universality threshold relative to random baselines. Ablation experiments show that universal neurons have a significant functional impact on model predictions. In particular, layer-wise ablation reveals that removing universal neurons in the first layer causes a disproportionately large increase in both KL divergence and loss, suggesting that early-layer universal neurons play a particularly critical role in shaping final predictions.
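
For readers unfamiliar with this kind of analysis, the sketch below illustrates the core idea behind the correlation step: activations for every MLP neuron in two independently trained models are recorded over the same token stream, standardized, and correlated pairwise, and a neuron counts as a universality candidate when its best match in the other model exceeds a threshold. This is a minimal illustrative sketch, not the paper's code; the activation-collection helper, matrix shapes, and threshold value are assumptions.

```python
import numpy as np

def pairwise_neuron_correlations(acts_a: np.ndarray, acts_b: np.ndarray) -> np.ndarray:
    """Pearson correlation between every neuron in model A and every neuron in model B.

    acts_a, acts_b: activation matrices of shape [n_tokens, n_neurons],
    recorded over the same token stream for two independently trained models.
    Returns an [n_neurons_a, n_neurons_b] correlation matrix.
    """
    # Standardize each neuron's activations to zero mean and unit variance.
    a = (acts_a - acts_a.mean(axis=0)) / (acts_a.std(axis=0) + 1e-8)
    b = (acts_b - acts_b.mean(axis=0)) / (acts_b.std(axis=0) + 1e-8)
    # Pearson correlation of standardized columns reduces to a scaled dot product.
    return (a.T @ b) / acts_a.shape[0]

def universality_candidates(corr: np.ndarray, threshold: float) -> np.ndarray:
    # A neuron in model A is a universality candidate if its best-correlated
    # partner in model B exceeds the threshold (which the paper calibrates
    # against random baselines).
    return np.abs(corr).max(axis=1) > threshold

# Hypothetical usage (collect_mlp_activations and the threshold are assumptions):
# acts_a = collect_mlp_activations(model_a, tokens)   # [n_tokens, n_neurons]
# acts_b = collect_mlp_activations(model_b, tokens)
# corr = pairwise_neuron_correlations(acts_a, acts_b)
# candidates = universality_candidates(corr, threshold=0.5)
```

In practice such statistics would be accumulated per layer and in streaming batches, since holding activations for all 5 million tokens in memory at once is rarely feasible.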

Citation

Advey Nandan, Cheng-Ting Chou, Amrit Kurakula. "Universal Neurons in GPT-2: Emergence, Persistence, and Functional Impact". Accepted to Mech Interp @ NeurIPS 2025.

