Accepted to Interplay @ COLM 2025

Interpreting the Latent Structure of Operator Precedence in Language Models

Dharunish Yugeswardeenoo, Harshil Nukala, Niranjan, Ved Shah

Abstract

Large Language Models (LLMs) have demonstrated impressive reasoning capabilities but continue to struggle with arithmetic tasks. Prior work largely focuses on outputs or prompting strategies, leaving open the question of the internal structure through which models perform arithmetic computation. This work investigates whether LLMs encode operator precedence in their internal representations, using the open-source instruction-tuned LLaMA 3.2-3B model. We constructed a dataset of arithmetic expressions with three operands and two operators, varying operand order and parenthesis placement. Using interpretability techniques such as the logit lens, linear classification probes, and UMAP-based geometric visualization, we show that intermediate computations are present in the residual stream, particularly after MLP blocks. We also introduce "partial embedding swap," a technique that alters operator precedence by exchanging high-impact embedding dimensions between operators.
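The dataset construction described above can be sketched as follows. This is a minimal illustration, not the authors' code: the abstract specifies only three operands, two operators, and varied parenthesization, so the particular operand pool, operator pool, and grouping variants below are assumptions.

```python
import itertools

def generate_expressions(operands=(2, 3, 4), operators=("+", "*")):
    """Enumerate three-operand, two-operator arithmetic expressions,
    with and without parentheses, paired with their evaluated values.
    Operand/operator pools are illustrative assumptions."""
    dataset = []
    for a, b, c in itertools.permutations(operands, 3):
        for op1, op2 in itertools.product(operators, repeat=2):
            flat = f"{a} {op1} {b} {op2} {c}"        # standard precedence applies
            left = f"({a} {op1} {b}) {op2} {c}"      # parentheses force left grouping
            right = f"{a} {op1} ({b} {op2} {c})"     # parentheses force right grouping
            for expr in (flat, left, right):
                # eval is safe here: inputs are generated digits and operators only
                dataset.append((expr, eval(expr)))
    return dataset

exprs = generate_expressions()
```

Contrasting the flat form with its parenthesized variants (e.g. `2 + 3 * 4` vs. `(2 + 3) * 4`) is what lets probes test whether the model's intermediate representations track the precedence-correct subresult.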

Citation

Dharunish Yugeswardeenoo, Harshil Nukala, Niranjan, Ved Shah. "Interpreting the Latent Structure of Operator Precedence in Language Models". Accepted to Interplay @ COLM 2025.

