Accepted to ACL SRW 2024
Authors: Dharunish Yugeswardeenoo
Although LLMs have the potential to transform many fields, they still underperform humans in reasoning tasks. Existing methods induce the model to produce step-by-step calculations. We propose Question Analysis Prompting (QAP), a simple zero-shot prompting strategy in which the model is asked to explain the question in n words before solving it; the value of n influences the length of the response generated by the model. The method adapts to problems of varying difficulty and shows promising results in math and commonsense reasoning across different model sizes. QAP is evaluated with GPT-3.5 Turbo and GPT-4 Turbo on the arithmetic datasets GSM8K, AQuA, and SAT and the commonsense dataset StrategyQA.
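As a concrete illustration of the strategy described above, the sketch below builds a QAP-style prompt and sends it to a chat model through the OpenAI API. The instruction wording, the model name, and the helper names (qap_prompt, ask) are illustrative assumptions for this sketch, not the paper's verbatim prompt or code.

# Minimal sketch of Question Analysis Prompting (QAP).
# The instruction wording and model name below are assumptions,
# not the paper's exact prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def qap_prompt(question: str, n: int) -> str:
    """Build a QAP-style zero-shot prompt: explain the question in roughly
    n words before solving it. Larger n tends to yield longer responses."""
    return (
        f"{question}\n\n"
        f"First, explain what this question is asking in about {n} words. "
        f"Then solve it step by step and state the final answer."
    )


def ask(question: str, n: int = 50, model: str = "gpt-3.5-turbo") -> str:
    """Send the QAP prompt to a chat model and return its reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": qap_prompt(question, n)}],
        temperature=0,  # deterministic decoding for evaluation
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Example GSM8K-style arithmetic question.
    print(ask(
        "Natalia sold clips to 48 of her friends in April, and then she "
        "sold half as many clips in May. How many clips did Natalia sell "
        "altogether in April and May?",
        n=50,
    ))

In this sketch, n is simply interpolated into the instruction, so sweeping it (e.g. 25, 50, 150) gives a direct way to study how requested explanation length affects answer quality on easy versus hard problems.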

