Accepted to NAACL SRW 2025

Rosetta-PL: Propositional Logic as a Benchmark for Large Language Model Reasoning

Shaun Baek, Shaun Esua-Mensah, Cyrus Tsui, Sejan Vigneswaralingam

Abstract

Large Language Models (LLMs) are primarily trained on high-resource natural languages, limiting their effectiveness in low-resource settings and in tasks requiring deep logical reasoning. We introduce Rosetta-PL, a benchmark designed to evaluate LLMs' logical reasoning and generalization capabilities in a controlled environment. We construct Rosetta-PL by translating a dataset of logical propositions from Lean into a custom logical language, which is then used to fine-tune an LLM. The benchmark evaluates whether LLMs can discover logical patterns within a propositional language, thereby measuring reasoning ability without relying on predefined inference steps or extraneous linguistic factors. Our experiments analyze how dataset size and translation methodology affect model performance. The results indicate that preserving logical relationships in the translation process significantly boosts precision, with accuracy plateauing beyond roughly 20,000 training samples.
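To illustrate the kind of structure-preserving translation the abstract describes, the sketch below maps standard propositional tokens to an invented vocabulary while leaving the formula's syntactic shape intact. The token names (`ZUP`, `TIL`, etc.) are hypothetical placeholders, not the actual Rosetta-PL vocabulary, which is defined in the paper itself.

```python
# Minimal sketch of a structure-preserving translation from standard
# propositional syntax into a custom token vocabulary.
# NOTE: this mapping is illustrative only; the real Rosetta-PL
# language is specified in the paper, not here.
TOKEN_MAP = {
    "and": "ZUP",
    "or": "MEK",
    "not": "VOR",
    "->": "TIL",
    "p": "qa",
    "q": "re",
    "r": "su",
}

def translate(formula: str) -> str:
    """Translate a whitespace-tokenized formula token by token.

    Parentheses (and any unmapped token) pass through unchanged, so
    the formula's syntactic -- and hence logical -- structure survives.
    """
    return " ".join(TOKEN_MAP.get(tok, tok) for tok in formula.split())

print(translate("( p and q ) -> r"))  # ( qa ZUP re ) TIL su
```

Because the mapping is one-to-one at the token level, logical relationships such as implication and conjunction are preserved, which the paper's results suggest is the property that most strongly drives model precision.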

Citation

Shaun Baek, Shaun Esua-Mensah, Cyrus Tsui, Sejan Vigneswaralingam. "Rosetta-PL: Propositional Logic as a Benchmark for Large Language Model Reasoning". Accepted to NAACL SRW 2025.
