Direct Confidence Alignment: Aligning Verbalized Confidence with Internal Confidence In Large Language Models

December 1, 2025

Accepted to ACL SRW 2025

Authors: Glenn Zhang, Treasure Mayowa, Jason Fan

Producing trustworthy and reliable Large Language Models (LLMs) has become increasingly important as their usage becomes more widespread. Calibration seeks to achieve this by improving the alignment between the model's confidence and the actual likelihood of its responses being correct or desirable. However, it has been observed that a model's internal confidence, derived from token probabilities, is not well aligned with its verbalized confidence, which can make results from different calibration methods misleading. We propose Direct Confidence Alignment (DCA), a method using Direct Preference Optimization to align an LLM's verbalized confidence with its internal confidence rather than ground-truth accuracy, enhancing model transparency and reliability. We evaluate DCA across multiple open-weight LLMs on a wide range of datasets. Our results show that DCA improves alignment metrics on certain model architectures, reducing inconsistencies in a model's confidence expression.
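To make the idea concrete, here is a minimal sketch of how DCA-style preference pairs could be constructed: internal confidence is approximated from token log-probabilities, the verbalized confidence is parsed out of the response text, and the sampled response whose stated confidence lies closer to the internal confidence is marked as preferred. The helper names, the percentage-parsing regex, and the geometric-mean estimate of internal confidence are illustrative assumptions, not the paper's released implementation; pairs produced this way would then be passed to a standard DPO trainer.

```python
# Hypothetical sketch of building DCA-style preference pairs.
# Assumptions (not from the paper's code): internal confidence is the
# geometric mean of token probabilities, and verbalized confidence is
# stated as "Confidence: NN%" on a 0-100 scale.

import math
import re
from dataclasses import dataclass


def internal_confidence(token_logprobs: list[float]) -> float:
    """Approximate internal confidence as the geometric mean of token
    probabilities (exponentiated mean log-probability)."""
    if not token_logprobs:
        return 0.0
    return math.exp(sum(token_logprobs) / len(token_logprobs))


def parse_verbalized_confidence(text: str) -> float | None:
    """Extract a stated confidence like 'Confidence: 85%' and map it to [0, 1]."""
    match = re.search(r"[Cc]onfidence\s*[:=]?\s*(\d{1,3})\s*%", text)
    if match is None:
        return None
    return min(int(match.group(1)), 100) / 100.0


@dataclass
class PreferencePair:
    prompt: str
    chosen: str    # response whose stated confidence tracks internal confidence
    rejected: str  # response whose stated confidence deviates more


def build_dca_pair(prompt: str, resp_a: str, resp_b: str,
                   logprobs_a: list[float],
                   logprobs_b: list[float]) -> PreferencePair | None:
    """Prefer the sampled response whose verbalized confidence is closest
    to the model's internal (token-probability-derived) confidence."""
    gaps = []
    for resp, logprobs in ((resp_a, logprobs_a), (resp_b, logprobs_b)):
        stated = parse_verbalized_confidence(resp)
        if stated is None:
            return None  # skip samples with no parsable stated confidence
        gaps.append((abs(stated - internal_confidence(logprobs)), resp))
    gaps.sort(key=lambda item: item[0])
    return PreferencePair(prompt=prompt, chosen=gaps[0][1], rejected=gaps[1][1])
```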
