A comprehensive review of classifier probability calibration metrics

Tracking #: 937-1917

Authors:

Richard Lane (ORCID: https://orcid.org/0000-0003-3741-0348)


Responsible editor: 

Francesca D. Faraci

Submission Type: 

Survey Paper

Abstract: 

Probabilities or confidence values produced by artificial intelligence (AI) and machine learning (ML) models often do not reflect their true accuracy, with some models being under- or overconfident in their predictions. For example, if a model is 80% sure of an outcome, is it correct 80% of the time? Probability calibration metrics measure the discrepancy between confidence and accuracy, providing an independent assessment of model calibration performance that complements traditional accuracy metrics. Understanding calibration is important when the outputs of multiple systems are combined, to avoid overconfident subsystems dominating the output. Such awareness also underpins assurance in safety- or business-critical contexts and builds user trust in models. This paper provides a comprehensive review of probability calibration metrics for classifier models, organizing them according to multiple groupings to highlight their relationships. We identify 94 metrics and group them into four main families: point-based, bin-based, kernel- or curve-based, and cumulative. For each metric, we catalogue properties of interest and provide equations in a unified notation, facilitating implementation and comparison by future researchers. Finally, we provide recommendations for which metrics should be used in different situations.
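
As a hedged illustration of what the bin-based family measures (not code from the paper itself), the sketch below implements the widely used expected calibration error: predictions are grouped into equal-width confidence bins, and the gap between each bin's empirical accuracy and its mean confidence is averaged, weighted by bin size. The function name, binning scheme, and toy data are illustrative assumptions introduced here.

    import numpy as np

    def expected_calibration_error(confidences, correct, n_bins=10):
        # Weighted average of |empirical accuracy - mean confidence| over
        # equal-width confidence bins (a standard bin-based calibration metric).
        confidences = np.asarray(confidences, dtype=float)
        correct = np.asarray(correct, dtype=float)  # 1.0 if the prediction was right, else 0.0
        n = len(confidences)
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            in_bin = (confidences > lo) & (confidences <= hi)
            if lo == 0.0:
                in_bin |= (confidences == 0.0)  # put confidence 0 in the first bin
            if in_bin.any():
                acc = correct[in_bin].mean()        # how often the model was right in this bin
                conf = confidences[in_bin].mean()   # how confident it claimed to be
                ece += (in_bin.sum() / n) * abs(acc - conf)
        return ece

    # Toy check: a roughly calibrated model vs. an artificially overconfident one.
    rng = np.random.default_rng(0)
    p = rng.uniform(0.5, 1.0, size=5000)          # reported confidences
    correct = rng.random(5000) < p                # outcomes drawn to match those confidences
    print(expected_calibration_error(p, correct))                          # close to 0
    print(expected_calibration_error(np.minimum(p + 0.15, 1.0), correct))  # noticeably larger

Note that the number of bins and the choice of equal-width versus equal-frequency binning are design decisions that bin-based metrics are sensitive to, which is one reason a survey compares them against point-based, kernel- or curve-based, and cumulative alternatives.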

Tags: 

  • Under Review

Data repository URLs: 

none

Date of Submission: 

Thursday, November 27, 2025

