Accepted to ACL SRW 2025
Authors: Glenn Zhang, Treasure Mayowa, Jason Fan
Producing trustworthy and reliable Large Language Models (LLMs) has become increasingly important as their use becomes more widespread. Calibration seeks to achieve this by improving the alignment between a model's confidence and the actual likelihood that its responses are correct or desirable. However, it has been observed that a model's internal confidence, derived from token probabilities, is often poorly aligned with its verbalized confidence, which can make results from different calibration methods misleading. We propose Direct Confidence Alignment (DCA), a method that uses Direct Preference Optimization (DPO) to align an LLM's verbalized confidence with its internal confidence rather than with ground-truth accuracy, enhancing model transparency and reliability. We evaluate DCA across multiple open-weight LLMs on a wide range of datasets. Our results show that DCA improves alignment metrics on certain model architectures, reducing inconsistencies in a model's confidence expression.
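To illustrate the alignment target described above, here is a minimal sketch (with hypothetical numbers and helper names, not the authors' implementation) of deriving an internal confidence score from token probabilities and forming a DPO preference pair that favors the response whose verbalized confidence is closest to it:

```python
import math

def internal_confidence(token_probs):
    """Internal confidence as the geometric mean of the answer's token
    probabilities (one simple choice; length-normalized so longer answers
    are not penalized)."""
    log_sum = sum(math.log(p) for p in token_probs)
    return math.exp(log_sum / len(token_probs))

def dpo_preference_pair(candidates, token_probs):
    """Given candidate responses that verbalize different confidences,
    prefer the one whose verbalized confidence is closest to the model's
    internal confidence (rather than to ground-truth accuracy)."""
    target = internal_confidence(token_probs)
    ranked = sorted(candidates,
                    key=lambda c: abs(c["verbalized_confidence"] - target))
    return {"chosen": ranked[0], "rejected": ranked[-1]}

# Hypothetical token probabilities for a generated answer.
probs = [0.9, 0.8, 0.95, 0.85]  # internal confidence ~= 0.87
cands = [
    {"text": "... I am 90% confident.", "verbalized_confidence": 0.90},
    {"text": "... I am 50% confident.", "verbalized_confidence": 0.50},
]
pair = dpo_preference_pair(cands, probs)  # chosen: the 90% response
```

The resulting (chosen, rejected) pairs would then be fed to a standard DPO training loop; the key point is that the preference signal comes from internal-vs-verbalized agreement, not from answer correctness.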

