Unveil Sources of Uncertainty: Feature Contribution to Conformal Prediction Intervals
- URL: http://arxiv.org/abs/2505.13118v1
- Date: Mon, 19 May 2025 13:49:05 GMT
- Title: Unveil Sources of Uncertainty: Feature Contribution to Conformal Prediction Intervals
- Authors: Marouane Il Idrissi, Agathe Fernandes Machado, Ewen Gallic, Arthur Charpentier
- Abstract summary: We propose a novel, model-agnostic uncertainty attribution (UA) method grounded in conformal prediction (CP). By defining cooperative games where CP interval properties, such as width and bounds, serve as value functions, we attribute predictive uncertainty to input features. Our experiments on synthetic benchmarks and real-world datasets demonstrate the practical utility and interpretative depth of our approach.
- Score: 0.3495246564946556
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cooperative game theory methods, notably Shapley values, have significantly enhanced machine learning (ML) interpretability. However, existing explainable AI (XAI) frameworks mainly attribute average model predictions, overlooking predictive uncertainty. This work addresses that gap by proposing a novel, model-agnostic uncertainty attribution (UA) method grounded in conformal prediction (CP). By defining cooperative games where CP interval properties, such as width and bounds, serve as value functions, we systematically attribute predictive uncertainty to input features. Extending beyond traditional Shapley values, we use the richer class of Harsanyi allocations, and in particular the proportional Shapley values, which distribute attribution proportionally to feature importance. We propose a Monte Carlo approximation method with robust statistical guarantees to address computational feasibility, significantly improving runtime efficiency. Our comprehensive experiments on synthetic benchmarks and real-world datasets demonstrate the practical utility and interpretative depth of our approach. By combining cooperative game theory and conformal prediction, we offer a rigorous, flexible toolkit for understanding and communicating predictive uncertainty in high-stakes ML applications.
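The attribution scheme the abstract describes (a cooperative game whose value function is a CP interval property, approximated by Monte Carlo sampling over feature permutations) can be sketched as below. The quantile models, the fixed-baseline imputation for absent features, and the permutation count are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def interval_width(model_lo, model_hi, x):
    """Value function: width of the conformal prediction interval at x."""
    return model_hi(x) - model_lo(x)

def mc_shapley_width(x, baseline, model_lo, model_hi, n_perm=200, rng=None):
    """Monte Carlo Shapley attribution of CP interval width to features.

    Features absent from a coalition are imputed with `baseline` values
    (a common simplification; other imputation schemes are possible)."""
    rng = np.random.default_rng(rng)
    d = len(x)
    phi = np.zeros(d)
    for _ in range(n_perm):
        perm = rng.permutation(d)
        z = baseline.astype(float).copy()
        prev = interval_width(model_lo, model_hi, z)
        for j in perm:
            z[j] = x[j]                        # add feature j to the coalition
            cur = interval_width(model_lo, model_hi, z)
            phi[j] += cur - prev               # marginal contribution of j
            prev = cur
    return phi / n_perm
```

By construction the attributions sum to the difference between the interval width at `x` and at the baseline (the efficiency property of Shapley values), so they decompose total predictive uncertainty across features.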
Related papers
- From Abstract to Actionable: Pairwise Shapley Values for Explainable AI [0.8192907805418583]
We propose Pairwise Shapley Values, a novel framework that grounds feature attributions in explicit, human-relatable comparisons. Our method introduces pairwise reference selection combined with single-value imputation to deliver intuitive, model-agnostic explanations. We demonstrate that Pairwise Shapley Values enhance interpretability across diverse regression and classification scenarios.
arXiv Detail & Related papers (2025-02-18T04:20:18Z) - Efficient distributional regression trees learning algorithms for calibrated non-parametric probabilistic forecasts [0.0]
In the context of regression, instead of estimating a conditional mean, this can be achieved by producing a predictive interval for the output. This paper introduces novel algorithms for learning probabilistic regression trees under the weighted interval score (WIS) or continuous ranked probability score (CRPS) loss functions.
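The WIS mentioned above is built from interval scores; a minimal sketch of the interval score for a single central prediction interval, following the standard definition, is given here (the function name is illustrative):

```python
def interval_score(lower, upper, y, alpha):
    """Interval score for a central (1 - alpha) prediction interval.

    Penalizes the interval width, plus 2/alpha times the distance by
    which the observation y falls outside the interval."""
    width = upper - lower
    below = (2.0 / alpha) * max(lower - y, 0.0)   # undercoverage penalty
    above = (2.0 / alpha) * max(y - upper, 0.0)   # overcoverage penalty
    return width + below + above
```

The WIS averages such scores over several nominal levels alpha, which is why it rewards intervals that are simultaneously narrow and well calibrated.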
arXiv Detail & Related papers (2025-02-07T18:39:35Z) - Statistical Inference for Temporal Difference Learning with Linear Function Approximation [62.69448336714418]
We study the consistency properties of TD learning with Polyak-Ruppert averaging and linear function approximation. First, we derive a novel high-dimensional probability convergence guarantee that depends explicitly on the variance and holds under weak conditions. We further establish refined high-dimensional Berry-Esseen bounds over the class of convex sets that guarantee faster rates than those in the literature.
arXiv Detail & Related papers (2024-10-21T15:34:44Z) - Ensemble Prediction via Covariate-dependent Stacking [0.0]
This study proposes a novel approach to ensemble prediction, called "covariate-dependent stacking" (CDST).
Unlike traditional stacking methods, CDST allows model weights to vary flexibly as a function of covariates, thereby enhancing predictive performance in complex scenarios.
Our findings suggest that CDST is especially valuable for, but not limited to, spatio-temporal prediction problems, offering a powerful tool for researchers and practitioners in various data analysis fields.
arXiv Detail & Related papers (2024-08-19T07:31:31Z) - Probabilistically Plausible Counterfactual Explanations with Normalizing Flows [2.675793767640172]
We present PPCEF, a novel method for generating probabilistically plausible counterfactual explanations.
Our method enforces plausibility by directly optimizing the explicit density function without assuming a particular family of parametrized distributions.
PPCEF is a powerful tool for interpreting machine learning models and for improving fairness, accountability, and trust in AI systems.
arXiv Detail & Related papers (2024-05-27T20:24:03Z) - Energy-Based Model for Accurate Estimation of Shapley Values in Feature Attribution [7.378438977893025]
EmSHAP (Energy-based model for Shapley value estimation) is proposed to estimate the expectation of the Shapley contribution function. A GRU (Gated Recurrent Unit)-coupled partition function estimation method is introduced.
arXiv Detail & Related papers (2024-04-01T12:19:33Z) - Prototype-based Aleatoric Uncertainty Quantification for Cross-modal Retrieval [139.21955930418815]
Cross-modal Retrieval methods build similarity relations between vision and language modalities by jointly learning a common representation space.
However, the predictions are often unreliable due to aleatoric uncertainty, which is induced by low-quality data, e.g., corrupt images, fast-paced videos, and non-detailed texts.
We propose a novel Prototype-based Aleatoric Uncertainty Quantification (PAU) framework to provide trustworthy predictions by quantifying the uncertainty arising from the inherent data ambiguity.
arXiv Detail & Related papers (2023-09-29T09:41:19Z) - Federated Conformal Predictors for Distributed Uncertainty Quantification [83.50609351513886]
Conformal prediction is emerging as a popular paradigm for providing rigorous uncertainty quantification in machine learning.
In this paper, we extend conformal prediction to the federated learning setting.
We propose a weaker notion of partial exchangeability, better suited to the FL setting, and use it to develop the Federated Conformal Prediction framework.
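Several of the listed papers build on the same split conformal recipe: calibrate a residual quantile on held-out data, then widen point predictions by that quantile. A minimal sketch, using absolute residuals as the nonconformity score and the standard finite-sample quantile correction (the function name and score choice are illustrative):

```python
import numpy as np

def split_conformal_interval(residuals_cal, y_hat_test, alpha=0.1):
    """Split conformal interval from held-out calibration residuals.

    residuals_cal: nonconformity scores |y_i - f(x_i)| on a calibration set.
    Uses the finite-sample-corrected quantile rank ceil((n+1)(1-alpha)),
    giving coverage >= 1 - alpha under exchangeability."""
    n = len(residuals_cal)
    k = int(np.ceil((n + 1) * (1 - alpha)))   # quantile rank, 1-indexed
    q = np.sort(residuals_cal)[min(k, n) - 1]
    return y_hat_test - q, y_hat_test + q
```

The federated extension described above replaces the single calibration set with per-client scores under a weaker partial-exchangeability assumption; this sketch shows only the centralized baseline.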
arXiv Detail & Related papers (2023-05-27T19:57:27Z) - Improving Adaptive Conformal Prediction Using Self-Supervised Learning [72.2614468437919]
We train an auxiliary model with a self-supervised pretext task on top of an existing predictive model and use the self-supervised error as an additional feature to estimate nonconformity scores.
We empirically demonstrate the benefit of the additional information using both synthetic and real data on the efficiency (width), deficit, and excess of conformal prediction intervals.
arXiv Detail & Related papers (2023-02-23T18:57:14Z) - Learning Accurate Dense Correspondences and When to Trust Them [161.76275845530964]
We aim to estimate a dense flow field relating two images, coupled with a robust pixel-wise confidence map.
We develop a flexible probabilistic approach that jointly learns the flow prediction and its uncertainty.
Our approach obtains state-of-the-art results on challenging geometric matching and optical flow datasets.
arXiv Detail & Related papers (2021-01-05T18:54:11Z) - Probabilistic electric load forecasting through Bayesian Mixture Density Networks [70.50488907591463]
Probabilistic load forecasting (PLF) is a key component in the extended tool-chain required for efficient management of smart energy grids.
We propose a novel PLF approach, framed on Bayesian Mixture Density Networks.
To achieve reliable and computationally scalable estimators of the posterior distributions, both Mean Field variational inference and deep ensembles are integrated.
arXiv Detail & Related papers (2020-12-23T16:21:34Z) - Evaluating probabilistic classifiers: Reliability diagrams and score decompositions revisited [68.8204255655161]
We introduce the CORP approach, which generates provably statistically Consistent, Optimally binned, and Reproducible reliability diagrams in an automated way.
CORP is based on non-parametric isotonic regression and is implemented via the pool-adjacent-violators (PAV) algorithm.
arXiv Detail & Related papers (2020-08-07T08:22:26Z)
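The PAV algorithm underlying the CORP reliability diagrams is short enough to sketch in full: it fits the best non-decreasing sequence under squared loss by repeatedly pooling adjacent blocks that violate monotonicity. This is a generic PAV implementation, not the CORP authors' code:

```python
def pav(values):
    """Pool-adjacent-violators: isotonic (non-decreasing) least-squares fit.

    Pools adjacent blocks whose means violate monotonicity, replacing
    them with their weighted mean, then expands blocks back to a list."""
    merged = []                                   # stack of [block mean, block size]
    for v in values:
        merged.append([float(v), 1])
        # pool while the last two blocks are out of order
        while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
            m2, s2 = merged.pop()
            m1, s1 = merged.pop()
            merged.append([(m1 * s1 + m2 * s2) / (s1 + s2), s1 + s2])
    out = []
    for mean, size in merged:
        out.extend([mean] * size)
    return out
```

Applied to empirical event frequencies ordered by predicted probability, the pooled means give the optimally binned, monotone recalibration curve that CORP plots.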
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.