Accepted to Interplay @ COLM 2025
Authors: Dharunish Yugeswardeenoo, Harshil Nukala, Niranjan, Ved Shah
Large Language Models (LLMs) have demonstrated impressive reasoning capabilities but continue to struggle with arithmetic tasks. Prior work has largely focused on outputs or prompting strategies, leaving open the question of what internal structure models use to perform arithmetic computation. This work investigates whether LLMs encode operator precedence in their internal representations, using the open-source, instruction-tuned LLaMA 3.2-3B model. We construct a dataset of arithmetic expressions with three operands and two operators, varying the order of operators and the placement of parentheses. Using interpretability techniques such as the logit lens, linear classification probes, and UMAP-based geometric visualization, we show that intermediate computations are present in the residual stream, particularly after MLP blocks. We also introduce "partial embedding swap," a technique that modifies operator precedence by exchanging high-impact embedding dimensions between operators.
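The "partial embedding swap" is described here only at a high level. The sketch below is a hypothetical illustration of the general idea, not the paper's implementation: it assumes the HuggingFace checkpoint name `meta-llama/Llama-3.2-3B-Instruct`, single-token `+` and `*` operators, and a largest-absolute-difference heuristic (with an illustrative `TOP_K`) for picking the "high-impact" dimensions; the paper's actual selection procedure may differ.

```python
# Hypothetical sketch of a partial embedding swap between two operator tokens.
# Assumptions: HF checkpoint name, single-token operators, and the
# largest-difference heuristic for "high-impact" dimensions are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-3.2-3B-Instruct"  # assumed checkpoint name
TOP_K = 32                                   # number of dimensions to swap (assumption)

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

emb = model.get_input_embeddings().weight.data  # (vocab_size, hidden_dim)

# Token ids of the two operators whose precedence behavior we want to trade.
plus_id = tok.encode("+", add_special_tokens=False)[0]
times_id = tok.encode("*", add_special_tokens=False)[0]

plus_vec, times_vec = emb[plus_id].clone(), emb[times_id].clone()

# Rank dimensions by how strongly the two operator embeddings differ and
# treat the largest gaps as the "high-impact" dimensions (illustrative choice).
impact = (plus_vec - times_vec).abs()
dims = impact.topk(TOP_K).indices

# Exchange only those dimensions between the two operator embeddings,
# leaving the rest of each embedding untouched.
emb[plus_id, dims] = times_vec[dims]
emb[times_id, dims] = plus_vec[dims]

# Probe whether the edited model now resolves precedence differently.
inputs = tok("2+3*4=", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=5)
print(tok.decode(out[0], skip_special_tokens=True))
```

In this sketch, swapping all dimensions would simply relabel the two operators; swapping only a small, targeted subset is what makes the intervention "partial" and lets one ask which embedding directions carry precedence information.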

