
Interpreting the Latent Structure of Operator Precedence in Language Models

December 1, 2025

Accepted to Interplay @ COLM 2025

Authors: Dharunish Yugeswardeenoo, Harshil Nukala, Niranjan, Ved Shah

Large Language Models (LLMs) have demonstrated impressive reasoning capabilities but continue to struggle with arithmetic tasks. Prior work has largely focused on outputs or prompting strategies, leaving open the question of the internal structure through which models perform arithmetic computation. This work investigates whether LLMs encode operator precedence in their internal representations, using the open-source instruction-tuned LLaMA 3.2-3B model. We constructed a dataset of arithmetic expressions with three operands and two operators, varying operator order and parenthesis placement. Using interpretability techniques such as the logit lens, linear classification probes, and UMAP-based geometric visualization, we show that intermediate computations are present in the residual stream, particularly after MLP blocks. We also introduce "partial embedding swap," a technique that modifies operator precedence by exchanging high-impact embedding dimensions between operators.
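To make the idea concrete, here is a minimal NumPy sketch of what a partial embedding swap might look like. The selection of "high-impact" dimensions below (a fixed index list) is a placeholder assumption for illustration; the paper's actual criterion for ranking dimensions is not specified in this abstract.

```python
import numpy as np

def partial_embedding_swap(emb_a, emb_b, dims):
    """Exchange the values at the given dimensions between two
    operator embeddings, leaving all other dimensions untouched."""
    a, b = emb_a.copy(), emb_b.copy()
    a[dims], b[dims] = emb_b[dims], emb_a[dims]
    return a, b

# Toy example: 8-dimensional embeddings standing in for '+' and '*'.
rng = np.random.default_rng(0)
plus = rng.normal(size=8)
times = rng.normal(size=8)

# Hypothetical "high-impact" dimensions (e.g. ranked by probe weight
# magnitude in a real experiment); chosen arbitrarily here.
high_impact = np.array([1, 4, 6])

plus_swapped, times_swapped = partial_embedding_swap(plus, times, high_impact)
```

In an actual intervention, the swapped vectors would replace the operator rows of the model's token-embedding matrix before running the forward pass, so that only the exchanged dimensions differ from the original model.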
