I-trustworthy Models. A framework for trustworthiness evaluation of probabilistic classifiers
- URL: http://arxiv.org/abs/2501.15617v1
- Date: Sun, 26 Jan 2025 17:54:43 GMT
- Title: I-trustworthy Models. A framework for trustworthiness evaluation of probabilistic classifiers
- Authors: Ritwik Vashistha, Arya Farahi
- Abstract summary: This work formalizes the I-trustworthy framework -- a novel framework for assessing the trustworthiness of probabilistic classifiers for inference tasks.
To assess I-trustworthiness, we use the local calibration error (LCE) and develop a hypothesis-testing method.
We present a diagnostic tool designed to identify and measure biases in cases of miscalibration.
- Score: 0.18416014644193066
- License:
- Abstract: As probabilistic models continue to permeate various facets of our society and contribute to scientific advancements, it becomes necessary to go beyond traditional metrics such as predictive accuracy and error rates and to assess their trustworthiness. Grounded in the competence-based theory of trust, this work formalizes the I-trustworthy framework -- a novel framework for assessing the trustworthiness of probabilistic classifiers for inference tasks by linking local calibration to trustworthiness. To assess I-trustworthiness, we use the local calibration error (LCE) and develop a hypothesis-testing method. This method utilizes a kernel-based test statistic, the Kernel Local Calibration Error (KLCE), to test the local calibration of a probabilistic classifier. This study provides theoretical guarantees by offering convergence bounds for an unbiased estimator of the KLCE. Additionally, we present a diagnostic tool designed to identify and measure biases in cases of miscalibration. The effectiveness of the proposed test statistic is demonstrated through its application to both simulated and real-world datasets. Finally, the LCE of related recalibration methods is studied, and we provide evidence of the insufficiency of existing methods to achieve I-trustworthiness.
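As a rough illustration of what a kernel-weighted check of local calibration looks like, the sketch below computes a plug-in, kernel-weighted local calibration statistic for a binary classifier. The Gaussian kernel, bandwidth, and quadratic-form estimator are illustrative assumptions; this is not the paper's KLCE statistic, its unbiased estimator, or its hypothesis test.

```python
import numpy as np

def rbf_kernel(a, b, bandwidth=0.1):
    """Gaussian (RBF) kernel between two 1-D arrays of predicted probabilities."""
    d = a[:, None] - b[None, :]
    return np.exp(-(d ** 2) / (2.0 * bandwidth ** 2))

def kernel_local_calibration_error(p_hat, y, bandwidth=0.1):
    """Plug-in, kernel-weighted local calibration statistic for a binary classifier.

    p_hat : predicted P(y=1) for n instances, shape (n,)
    y     : observed binary labels, shape (n,)

    The statistic is small when, locally around every predicted probability,
    the average residual (y - p_hat) is near zero. This is only a sketch of the
    idea of testing *local* calibration, not the paper's KLCE.
    """
    residual = y - p_hat                      # calibration residuals
    K = rbf_kernel(p_hat, p_hat, bandwidth)   # similarity between predictions
    n = len(p_hat)
    # Quadratic form: residuals that share a local neighbourhood and point in
    # the same direction inflate the statistic.
    return float(residual @ K @ residual) / (n * n)

# Simulated check: labels drawn from the predicted probabilities themselves
# should give a value near zero; systematically shifted labels should not.
rng = np.random.default_rng(0)
p = rng.uniform(0.05, 0.95, size=2000)
y_calibrated = rng.binomial(1, p)
y_biased = rng.binomial(1, np.clip(p + 0.15, 0.0, 1.0))
print(kernel_local_calibration_error(p, y_calibrated))
print(kernel_local_calibration_error(p, y_biased))   # noticeably larger
```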
Related papers
- Distribution-Free Calibration of Statistical Confidence Sets [2.283561089098417]
We introduce two novel methods, TRUST and TRUST++, for calibrating confidence sets to achieve distribution-free conditional coverage.
We demonstrate that our methods outperform existing approaches, particularly in small-sample regimes.
arXiv Detail & Related papers (2024-11-28T20:45:59Z) - Quantifying calibration error in modern neural networks through evidence based theory [0.0]
This paper introduces a novel framework for quantifying the trustworthiness of neural networks by incorporating subjective logic into the evaluation of the Expected Calibration Error (ECE); a standard binned ECE estimator is sketched after this list.
We demonstrate the effectiveness of this approach through experiments on the MNIST and CIFAR-10 datasets, where post-calibration results indicate improved trustworthiness.
The proposed framework offers a more interpretable and nuanced assessment of AI models, with potential applications in sensitive domains such as healthcare and autonomous systems.
arXiv Detail & Related papers (2024-10-31T23:54:21Z) - Beyond Calibration: Assessing the Probabilistic Fit of Neural Regressors via Conditional Congruence [2.2359781747539396]
Deep networks often suffer from overconfidence and misaligned predictive distributions.
We introduce a metric, Conditional Congruence Error (CCE), that uses conditional kernel mean embeddings to estimate the distance between the learned predictive distribution and the empirical, conditional distribution in a dataset.
We show that using CCE to measure congruence 1) accurately quantifies misalignment between distributions when the data-generating process is known, 2) effectively scales to real-world, high-dimensional image regression tasks, and 3) can be used to gauge model reliability on unseen instances (a simplified per-instance sketch of this idea appears after this list).
arXiv Detail & Related papers (2024-05-20T23:30:07Z) - Revisiting Confidence Estimation: Towards Reliable Failure Prediction [53.79160907725975]
We identify a general, widespread, but largely neglected phenomenon: most confidence estimation methods are actually harmful for detecting misclassification errors.
We propose to enlarge the confidence gap by finding flat minima, which yields state-of-the-art failure prediction performance.
arXiv Detail & Related papers (2024-03-05T11:44:14Z) - Model-free generalized fiducial inference [0.0]
I propose and develop ideas for a model-free statistical framework for imprecise probabilistic prediction inference.
This framework facilitates uncertainty quantification in the form of prediction sets that offer finite-sample control of Type I errors.
I consider the theoretical and empirical properties of a precise probabilistic approximation to the model-free imprecise framework.
arXiv Detail & Related papers (2023-07-24T01:58:48Z) - Calibration of Neural Networks [77.34726150561087]
This paper presents a survey of confidence calibration problems in the context of neural networks.
We analyze the problem statement, calibration definitions, and different approaches to evaluation.
Empirical experiments cover various datasets and models, comparing calibration methods according to different criteria.
arXiv Detail & Related papers (2023-03-19T20:27:51Z) - Towards Reliable Medical Image Segmentation by utilizing Evidential Calibrated Uncertainty [52.03490691733464]
We introduce DEviS, an easily implementable foundational model that seamlessly integrates into various medical image segmentation networks.
By leveraging subjective logic theory, we explicitly model probability and uncertainty for the problem of medical image segmentation.
DEviS incorporates an uncertainty-aware filtering module, which utilizes the metric of uncertainty-calibrated error to filter reliable data.
arXiv Detail & Related papers (2023-01-01T05:02:46Z) - The Implicit Delta Method [61.36121543728134]
In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss of uncertainty.
We show that the change in the evaluation due to regularization is consistent for the variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference.
arXiv Detail & Related papers (2022-11-11T19:34:17Z) - Reliability-Aware Prediction via Uncertainty Learning for Person Image Retrieval [51.83967175585896]
UAL aims at providing reliability-aware predictions by considering data uncertainty and model uncertainty simultaneously.
Data uncertainty captures the "noise" inherent in the sample, while model uncertainty depicts the model's confidence in the sample's prediction.
arXiv Detail & Related papers (2022-10-24T17:53:20Z) - MACEst: The reliable and trustworthy Model Agnostic Confidence Estimator [0.17188280334580192]
We argue that any confidence estimates based upon standard machine learning point prediction algorithms are fundamentally flawed.
We present MACEst, a Model Agnostic Confidence Estimator, which provides reliable and trustworthy confidence estimates.
arXiv Detail & Related papers (2021-09-02T14:34:06Z) - Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z)
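The Expected Calibration Error (ECE) referenced in the evidence-based-theory entry above is conventionally estimated by binning predictions. The sketch below shows that standard binned estimator; the bin count and equal-width bins are common defaults, not choices taken from that paper.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Standard binned ECE: |accuracy - confidence| per bin, weighted by the
    fraction of samples falling in each bin.

    confidences : top-class predicted probability per sample, shape (n,)
    correct     : 1 if the top-class prediction was right, else 0, shape (n,)
    """
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not np.any(in_bin):
            continue
        acc = np.mean(correct[in_bin])        # empirical accuracy in the bin
        conf = np.mean(confidences[in_bin])   # average confidence in the bin
        ece += (np.sum(in_bin) / n) * abs(acc - conf)
    return ece
```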
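The Conditional Congruence Error entry above compares a model's predictive distribution with the empirical conditional distribution via conditional kernel mean embeddings. As a heavily simplified stand-in, the sketch below computes a locally weighted, per-instance MMD between predictive samples and training targets near a query input; the kernels, bandwidths, and weighting scheme are assumptions for illustration and do not reproduce that paper's CCE estimator.

```python
import numpy as np

def rbf(u, v, gamma):
    """RBF kernel matrix between 1-D arrays u and v."""
    return np.exp(-gamma * (u[:, None] - v[None, :]) ** 2)

def per_instance_congruence_error(model_samples, x_train, y_train, x_query,
                                  gamma_y=1.0, gamma_x=1.0):
    """MMD-style distance between the model's predictive distribution at
    x_query and a locally weighted empirical distribution of training targets.

    model_samples    : draws from the predictive distribution at x_query, shape (m,)
    x_train, y_train : 1-D training inputs and targets
    x_query          : scalar query input
    """
    # Weight training targets by input proximity to the query point.
    w = np.exp(-gamma_x * (x_train - x_query) ** 2)
    w = w / w.sum()

    K_ss = rbf(model_samples, model_samples, gamma_y)
    K_sy = rbf(model_samples, y_train, gamma_y)
    K_yy = rbf(y_train, y_train, gamma_y)

    m = len(model_samples)
    # Squared MMD between the sample distribution and the weighted empirical one.
    mmd2 = K_ss.sum() / (m * m) - 2.0 * (K_sy @ w).mean() + w @ K_yy @ w
    return float(mmd2)
```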
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.