Accepted to ACL SRW 2025

Direct Confidence Alignment: Aligning Verbalized Confidence with Internal Confidence In Large Language Models

Glenn Zhang, Treasure Mayowa, Jason Fan

Abstract

Producing trustworthy and reliable Large Language Models (LLMs) has become increasingly important as their use becomes more widespread. Calibration seeks to achieve this by improving the alignment between a model's confidence and the actual likelihood that its responses are correct or desirable. However, it has been observed that a model's internal confidence, derived from token probabilities, is not well aligned with its verbalized confidence, which can make the results of different calibration methods misleading. We propose Direct Confidence Alignment (DCA), a method that uses Direct Preference Optimization to align an LLM's verbalized confidence with its internal confidence rather than with ground-truth accuracy, enhancing model transparency and reliability. We evaluate DCA across multiple open-weight LLMs on a wide range of datasets. Our results show that DCA improves alignment metrics on certain model architectures, reducing inconsistencies in a model's confidence expression.
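To make the idea concrete, the sketch below shows one plausible way to derive an internal confidence from token probabilities and turn it into a DPO preference pair. This is an illustration only, not the paper's implementation: the geometric-mean aggregation, the candidate confidence values, and the `build_dca_pair` helper are all assumptions chosen for clarity.

```python
import math

def internal_confidence(token_logprobs):
    """Internal confidence as the geometric mean of the answer-token
    probabilities (one common aggregation; the paper may use another)."""
    return math.exp(sum(token_logprobs) / len(token_logprobs))

def build_dca_pair(prompt, response, token_logprobs, candidate_confidences):
    """Build a DPO-style preference pair (hypothetical helper): the
    verbalized confidence closest to the internal confidence becomes the
    'chosen' completion, the farthest becomes 'rejected'."""
    p_int = internal_confidence(token_logprobs)
    # Rank candidate verbalized confidences by distance to internal confidence.
    ranked = sorted(candidate_confidences, key=lambda c: abs(c - p_int))
    chosen, rejected = ranked[0], ranked[-1]
    return {
        "prompt": f"{prompt}\n{response}\nHow confident are you?",
        "chosen": f"Confidence: {round(chosen * 100)}%",
        "rejected": f"Confidence: {round(rejected * 100)}%",
    }
```

Pairs in this format could then be fed to a standard DPO trainer, so that the model is rewarded for verbalizing a confidence near its own token-level probability rather than near dataset accuracy.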

Citation

Glenn Zhang, Treasure Mayowa, Jason Fan. "Direct Confidence Alignment: Aligning Verbalized Confidence with Internal Confidence In Large Language Models". Accepted to ACL SRW 2025.

