Selective Ensembles for Consistent Predictions
- URL: http://arxiv.org/abs/2111.08230v1
- Date: Tue, 16 Nov 2021 05:03:56 GMT
- Title: Selective Ensembles for Consistent Predictions
- Authors: Emily Black and Klas Leino and Matt Fredrikson
- Abstract summary: Models trained to the same objective can behave inconsistently on individual predictions, which is undesirable in high-stakes contexts.
We show that this inconsistency extends beyond predictions to feature attributions.
We prove that selective ensembles achieve consistent predictions and feature attributions while maintaining low abstention rates.
- Score: 19.154189897847804
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent work has shown that models trained to the same objective, and which
achieve similar measures of accuracy on consistent test data, may nonetheless
behave very differently on individual predictions. This inconsistency is
undesirable in high-stakes contexts, such as medical diagnosis and finance. We
show that this inconsistent behavior extends beyond predictions to feature
attributions, which may likewise have negative implications for the
intelligibility of a model, and one's ability to find recourse for subjects. We
then introduce selective ensembles to mitigate such inconsistencies by applying
hypothesis testing to the predictions of a set of models trained using
randomly-selected starting conditions; importantly, selective ensembles can
abstain in cases where a consistent outcome cannot be achieved up to a
specified confidence level. We prove that prediction disagreement between
selective ensembles is bounded, and empirically demonstrate that selective
ensembles achieve consistent predictions and feature attributions while
maintaining low abstention rates. On several benchmark datasets, selective
ensembles reach zero inconsistently predicted points, with abstention rates as
low as 1.5%.
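To make the abstention mechanism concrete, here is a minimal sketch of a selective ensemble vote with a binomial hypothesis test; the `models` list, the `predict` interface, and the significance level are assumptions for illustration, not the authors' released implementation.

```python
# Minimal sketch of a selective ensemble (illustrative; not the authors' code).
# `models` is assumed to be a list of classifiers trained from different random
# seeds, each exposing .predict(x) -> label.
from collections import Counter
from scipy.stats import binomtest

ABSTAIN = None

def selective_ensemble_predict(models, x, alpha=0.05):
    """Majority vote that abstains unless the winning label is statistically
    significant at level alpha under a binomial test on the top two labels."""
    votes = Counter(m.predict(x) for m in models)
    (top_label, top_count), *rest = votes.most_common(2)
    runner_up = rest[0][1] if rest else 0
    # Test whether the leading label beats the runner-up more often than chance.
    p_value = binomtest(top_count, top_count + runner_up,
                        p=0.5, alternative="greater").pvalue
    return top_label if p_value <= alpha else ABSTAIN
```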
Related papers
- Probabilistic Conformal Prediction with Approximate Conditional Validity [81.30551968980143]
We develop a new method for generating prediction sets that combines the flexibility of conformal methods with an estimate of the conditional distribution.
Our method consistently outperforms existing approaches in terms of conditional coverage.
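For readers unfamiliar with conformal methods, the following is a generic split-conformal prediction-set sketch, assuming a `predict_proba` function and integer class labels; it is background for the entries in this list, not the specific conditional-distribution method of this paper.

```python
# Generic split-conformal prediction sets for classification (background
# illustration only; not the cited paper's method).
import numpy as np

def conformal_prediction_set(predict_proba, X_cal, y_cal, x_test, alpha=0.1):
    cal_probs = predict_proba(X_cal)                         # shape (n, n_classes)
    scores = 1.0 - cal_probs[np.arange(len(y_cal)), y_cal]   # nonconformity scores
    n = len(scores)
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)   # finite-sample correction
    q = np.quantile(scores, q_level, method="higher")
    test_probs = predict_proba(x_test[None, :])[0]
    # Keep every label whose nonconformity score falls below the threshold.
    return [k for k, p in enumerate(test_probs) if 1.0 - p <= q]
```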
arXiv Detail & Related papers (2024-07-01T20:44:48Z) - Confidence on the Focal: Conformal Prediction with Selection-Conditional Coverage [6.010965256037659]
Conformal prediction builds marginally valid prediction intervals that cover the unknown outcome of a randomly drawn new test point with a prescribed probability.
When the focal unit(s) are selected in a data-dependent way, marginally valid conformal prediction intervals may not provide valid coverage for them due to selection bias.
This paper presents a general framework for constructing a prediction set with finite-sample exact coverage conditional on the unit being selected.
arXiv Detail & Related papers (2024-03-06T17:18:24Z) - Predicting generalization performance with correctness discriminators [64.00420578048855]
We present a novel model that establishes upper and lower bounds on the accuracy, without requiring gold labels for the unseen data.
We show across a variety of tagging, parsing, and semantic parsing tasks that the gold accuracy is reliably between the predicted upper and lower bounds.
arXiv Detail & Related papers (2023-11-15T22:43:42Z) - Invariant Probabilistic Prediction [45.90606906307022]
We show that arbitrary distribution shifts do not, in general, admit invariant and robust probabilistic predictions.
We propose a method to yield invariant probabilistic predictions, called IPP, and study the consistency of the underlying parameters.
arXiv Detail & Related papers (2023-09-18T18:50:24Z) - Generalization within in silico screening [19.58677466616286]
In silico screening uses predictive models to select a batch of compounds with favorable properties from a library for experimental validation.
By extending learning theory, we show that the selectivity of the selection policy can significantly impact generalization.
We show that generalization can be markedly enhanced when considering a model's ability to predict the fraction of desired outcomes in a batch.
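The batch-level quantity discussed here can be illustrated with a toy selection step; `scores`, `labels`, and `batch_size` are assumed inputs, not part of the paper's setup.

```python
# Toy illustration of batch selection in virtual screening (assumed setup).
import numpy as np

def select_batch_and_hit_rate(scores, labels, batch_size):
    """Pick the top-scoring compounds and report the fraction of true hits in
    the selected batch, the batch-level quantity highlighted by the paper."""
    chosen = np.argsort(-scores)[:batch_size]
    return chosen, labels[chosen].mean()
```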
arXiv Detail & Related papers (2023-07-18T16:01:01Z) - Improving Adaptive Conformal Prediction Using Self-Supervised Learning [72.2614468437919]
We train an auxiliary model with a self-supervised pretext task on top of an existing predictive model and use the self-supervised error as an additional feature to estimate nonconformity scores.
We empirically demonstrate the benefit of the additional information using both synthetic and real data on the efficiency (width), deficit, and excess of conformal prediction intervals.
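In the same spirit (though the paper's exact construction differs), one way to fold a self-supervised pretext error into nonconformity scoring is to use it as an extra input to a difficulty model that normalizes residuals; all names and the choice of regressor below are assumptions for illustration.

```python
# Hedged sketch: a self-supervised pretext error used as an additional feature
# in a per-example difficulty model that normalizes conformal residuals.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def fit_difficulty_model(X_cal, residuals_cal, pretext_err_cal):
    X_aug = np.column_stack([X_cal, pretext_err_cal])   # augment with pretext error
    return GradientBoostingRegressor().fit(X_aug, np.abs(residuals_cal))

def nonconformity_scores(model, X, residuals, pretext_err, eps=1e-8):
    X_aug = np.column_stack([X, pretext_err])
    return np.abs(residuals) / (model.predict(X_aug) + eps)
```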
arXiv Detail & Related papers (2023-02-23T18:57:14Z) - Calibrated Selective Classification [34.08454890436067]
We develop a new approach to selective classification in which we propose a method for rejecting examples with "uncertain" uncertainties.
We present a framework for learning selectively calibrated models, where a separate selector network is trained to improve the selective calibration error of a given base model.
We demonstrate the empirical effectiveness of our approach on multiple image classification and lung cancer risk assessment tasks.
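A simplified stand-in for the selective calibration objective is an expected calibration error computed only over accepted examples; the confidence-threshold selector below is an assumption, not the paper's learned selector network.

```python
# Expected calibration error restricted to accepted examples.
import numpy as np

def selective_ece(confidences, correct, threshold, n_bins=10):
    accept = confidences >= threshold
    conf, corr = confidences[accept], correct[accept].astype(float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(conf[in_bin].mean() - corr[in_bin].mean())
    return ece, accept.mean()   # calibration error on accepted points, coverage
```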
arXiv Detail & Related papers (2022-08-25T13:31:09Z) - Uncertainty estimation of pedestrian future trajectory using Bayesian approximation [137.00426219455116]
In dynamic traffic scenarios, planning based on deterministic predictions is not trustworthy.
The authors propose to quantify forecasting uncertainty using Bayesian approximation, which deterministic approaches cannot capture.
The effect of dropout weights and long-term prediction on future state uncertainty is also studied.
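A minimal Monte Carlo dropout sketch of the kind of uncertainty estimate described here; the cited paper's trajectory model and training setup are not reproduced.

```python
# Monte Carlo dropout: repeated stochastic forward passes with dropout active.
import torch

@torch.no_grad()
def mc_dropout_forecast(model, x, n_samples=50):
    """Return the mean prediction and its per-dimension variance across
    stochastic forward passes as an uncertainty proxy."""
    model.eval()
    for module in model.modules():            # re-enable dropout layers only
        if isinstance(module, torch.nn.Dropout):
            module.train()
    samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.var(dim=0)
```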
arXiv Detail & Related papers (2022-05-04T04:23:38Z) - Selective Regression Under Fairness Criteria [30.672082160544996]
In some cases, the performance of the minority group can decrease as coverage is reduced.
We show that such an unwanted behavior can be avoided if we can construct features satisfying the sufficiency criterion.
arXiv Detail & Related papers (2021-10-28T19:05:12Z) - Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when used in fully/semi/weakly-supervised frameworks.
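As a sketch of the ensemble-based family mentioned above, the snippet below averages member probabilities and reports predictive entropy as the uncertainty measure; the `members` interface is an assumption.

```python
# Deep-ensemble uncertainty: mean probabilities plus predictive entropy.
# `members` is an assumed list of callables mapping an input to a
# class-probability vector.
import numpy as np

def ensemble_uncertainty(members, x):
    probs = np.stack([m(x) for m in members])       # shape (M, n_classes)
    mean_probs = probs.mean(axis=0)
    entropy = float(-(mean_probs * np.log(mean_probs + 1e-12)).sum())
    return mean_probs, entropy
```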
arXiv Detail & Related papers (2021-10-13T01:23:48Z) - Selective Classification Can Magnify Disparities Across Groups [89.14499988774985]
We find that while selective classification can improve average accuracies, it can simultaneously magnify existing accuracy disparities.
Increasing abstentions can even decrease accuracies on some groups.
We train distributionally-robust models that achieve similar full-coverage accuracies across groups and show that selective classification uniformly improves each group.
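The group-wise effect of abstention can be checked with a few lines; the confidence-threshold abstention rule below is an assumed stand-in for whatever selection mechanism a given model uses.

```python
# Per-group accuracy under a simple confidence-threshold abstention rule.
import numpy as np

def per_group_selective_accuracy(confidences, correct, groups, threshold):
    accept = confidences >= threshold
    accs = {}
    for g in np.unique(groups):
        mask = accept & (groups == g)
        accs[g] = correct[mask].mean() if mask.any() else float("nan")
    return accs, accept.mean()   # accuracy per group on accepted points, coverage
```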
arXiv Detail & Related papers (2020-10-27T08:51:30Z)