Rashomon Capacity: A Metric for Predictive Multiplicity in Probabilistic Classification
- URL: http://arxiv.org/abs/2206.01295v1
- Date: Thu, 2 Jun 2022 20:44:19 GMT
- Title: Rashomon Capacity: A Metric for Predictive Multiplicity in Probabilistic Classification
- Authors: Hsiang Hsu and Flavio du Pin Calmon
- Abstract summary: Predictive multiplicity occurs when classification models assign conflicting predictions to individual samples.
We introduce a new measure of predictive multiplicity in probabilistic classification called Rashomon Capacity.
We show that Rashomon Capacity yields principled strategies for disclosing conflicting models to stakeholders.
- Score: 4.492630871726495
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Predictive multiplicity occurs when classification models with nearly
indistinguishable average performances assign conflicting predictions to
individual samples. When used for decision-making in applications of
consequence (e.g., lending, education, criminal justice), models developed
without regard for predictive multiplicity may result in unjustified and
arbitrary decisions for specific individuals. We introduce a new measure of
predictive multiplicity in probabilistic classification called Rashomon
Capacity. Prior metrics for predictive multiplicity focus on classifiers that
output thresholded (i.e., 0-1) predicted classes. In contrast, Rashomon
Capacity applies to probabilistic classifiers, capturing more nuanced score
variations for individual samples. We provide a rigorous derivation for
Rashomon Capacity, argue its intuitive appeal, and demonstrate how to estimate
it in practice. We show that Rashomon Capacity yields principled strategies for
disclosing conflicting models to stakeholders. Our numerical experiments
illustrate how Rashomon Capacity captures predictive multiplicity in various
datasets and learning models, including neural networks. The tools introduced
in this paper can help data scientists measure, report, and ultimately resolve
predictive multiplicity prior to model deployment.
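The abstract's central quantity can be made concrete: for a single sample, collect the predicted class distributions from a set of competing models and compute the capacity of the channel from model index to predicted class. The sketch below is a minimal illustration under that reading, using the Blahut-Arimoto algorithm; the function name and defaults are my own, not the authors' implementation.

```python
import numpy as np

def _log_ratio(P, q, log):
    # log(P / q) where P > 0, and 0 elsewhere (using 0 * log 0 := 0)
    with np.errstate(divide="ignore", invalid="ignore"):
        r = log(P / q)
    return np.where(P > 0, r, 0.0)

def rashomon_capacity(probs, iters=500, tol=1e-10):
    """Estimate 2**C for the channel model-index -> class via Blahut-Arimoto.

    probs: (m, c) array; row i is model i's predicted class distribution
    for one sample. Returns a value in [1, c]: 1 when all models output
    identical distributions, c under maximal conflict.
    """
    P = np.asarray(probs, dtype=float)
    w = np.full(P.shape[0], 1.0 / P.shape[0])      # weight over models
    for _ in range(iters):
        q = w @ P                                   # marginal over classes
        # multiplicative Blahut-Arimoto update: w_i ∝ w_i * exp(KL(P_i || q))
        d = np.exp((P * _log_ratio(P, q, np.log)).sum(axis=1))
        w_new = w * d / (w * d).sum()
        if np.abs(w_new - w).max() < tol:
            w = w_new
            break
        w = w_new
    q = w @ P
    C = float((w[:, None] * P * _log_ratio(P, q, np.log2)).sum())  # bits
    return 2.0 ** C
```

For two agreeing models the value is 1 (no multiplicity); for two one-hot models predicting opposite classes it is 2, the number of classes.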
Related papers
- An Experimental Study on the Rashomon Effect of Balancing Methods in Imbalanced Classification [0.0]
This paper examines the impact of balancing methods on predictive multiplicity using the Rashomon effect.
This is crucial because blindly selecting one model from a set of approximately equally accurate models is risky in data-centric AI.
arXiv Detail & Related papers (2024-03-22T13:08:22Z)
- Probabilistic Contrastive Learning for Long-Tailed Visual Recognition [78.70453964041718]
Long-tailed distributions frequently emerge in real-world data, where a large number of minority categories contain a limited number of samples.
Recent investigations have revealed that supervised contrastive learning exhibits promising potential in alleviating the data imbalance.
We propose a novel probabilistic contrastive (ProCo) learning algorithm that estimates the data distribution of the samples from each class in the feature space.
arXiv Detail & Related papers (2024-03-11T13:44:49Z)
- Predictive Churn with the Set of Good Models [64.05949860750235]
We study the effect of conflicting predictions over the set of near-optimal machine learning models.
We present theoretical results on the expected churn between models within the Rashomon set.
We show how our approach can be used to better anticipate, reduce, and avoid churn in consumer-facing applications.
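The basic quantity behind this summary is simple: the churn between two near-optimal models is the fraction of samples on which their predicted labels disagree. A minimal helper (my own, not the paper's formal definition of expected churn):

```python
import numpy as np

def churn(preds_a, preds_b):
    """Fraction of samples on which two models' predicted labels differ."""
    a, b = np.asarray(preds_a), np.asarray(preds_b)
    return float(np.mean(a != b))
```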
arXiv Detail & Related papers (2024-02-12T16:15:25Z)
- Dropout-Based Rashomon Set Exploration for Efficient Predictive Multiplicity Estimation [15.556756363296543]
Predictive multiplicity refers to the phenomenon in which classification tasks admit multiple competing models that achieve almost-equally-optimal performance.
We propose a novel framework that utilizes dropout techniques for exploring models in the Rashomon set.
We show that our technique consistently outperforms baselines in terms of the effectiveness of predictive multiplicity metric estimation.
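To illustrate the exploration idea in the simplest possible setting, the toy sketch below perturbs a trained logistic model with random weight dropout and keeps the perturbed models whose log-loss stays within eps of the base model. This is my own simplified stand-in, not the paper's framework:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dropout_rashomon_set(w, X, y, p=0.2, n_draws=200, eps=0.02):
    """Sample candidate models by randomly dropping weights of a trained
    logistic model, keeping those within eps of the base log-loss.
    Returns a (k, n) matrix of the kept models' predicted scores."""
    def log_loss(scores):
        s = np.clip(scores, 1e-12, 1 - 1e-12)
        return -np.mean(y * np.log(s) + (1 - y) * np.log(1 - s))
    base = log_loss(sigmoid(X @ w))
    kept = []
    for _ in range(n_draws):
        mask = rng.random(w.shape) > p              # drop each weight w.p. p
        scores = sigmoid(X @ (w * mask) / (1 - p))  # inverted-dropout scaling
        if log_loss(scores) <= base + eps:
            kept.append(scores)
    return np.array(kept)
```

The resulting score matrix can then be fed into any per-sample multiplicity metric.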
arXiv Detail & Related papers (2024-02-01T16:25:00Z)
- Deep Neural Network Benchmarks for Selective Classification [27.098996474946446]
Multiple selective classification frameworks exist, most of which rely on deep neural network architectures.
We evaluate these approaches using several criteria, including selective error rate, empirical coverage, the class distribution of rejected instances, and performance on out-of-distribution instances.
arXiv Detail & Related papers (2024-01-23T12:15:47Z)
- An Additive Instance-Wise Approach to Multi-class Model Interpretation [53.87578024052922]
Interpretable machine learning offers insights into what factors drive a certain prediction of a black-box system.
Existing methods mainly focus on selecting explanatory input features, which follow either locally additive or instance-wise approaches.
This work exploits the strengths of both methods and proposes a global framework for learning local explanations simultaneously for multiple target classes.
arXiv Detail & Related papers (2022-07-07T06:50:27Z)
- Predictive Multiplicity in Probabilistic Classification [25.111463701666864]
We present a framework for measuring predictive multiplicity in probabilistic classification.
We demonstrate the incidence and prevalence of predictive multiplicity in real-world tasks.
Our results emphasize the need to report predictive multiplicity more widely.
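For probabilistic classifiers, one natural per-sample summary in this spirit is the spread of predicted scores across the competing models (the cited paper studies a related quantity it calls the viable prediction range; the helper below is my own simplified version):

```python
import numpy as np

def score_spread(scores):
    """Per-sample spread of predicted scores across competing models.

    scores: (m, n) array, model i's predicted score for sample j.
    Returns an (n,) array; larger values indicate more multiplicity."""
    S = np.asarray(scores, dtype=float)
    return S.max(axis=0) - S.min(axis=0)
```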
arXiv Detail & Related papers (2022-06-02T16:25:29Z)
- Learning Diverse Representations for Fast Adaptation to Distribution Shift [78.83747601814669]
We present a method for learning multiple models, incorporating an objective that pressures each to learn a distinct way to solve the task.
We demonstrate our framework's ability to facilitate rapid adaptation to distribution shift.
arXiv Detail & Related papers (2020-06-12T12:23:50Z)
- Regularizing Class-wise Predictions via Self-knowledge Distillation [80.76254453115766]
We propose a new regularization method that penalizes the predictive distribution between similar samples.
This results in regularizing the dark knowledge (i.e., the knowledge on wrong predictions) of a single network.
Our experimental results on various image classification tasks demonstrate that this simple yet powerful method can significantly improve generalization.
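The penalty described above amounts to a divergence between the predictive distributions of similar samples. A toy numpy version of such a pairwise term (names are illustrative, not the paper's API):

```python
import numpy as np

def pairwise_kl_penalty(p, q):
    """KL(p || q) between the predictive distributions of two samples
    from the same class; minimizing it pushes the predictions together."""
    p = np.clip(np.asarray(p, dtype=float), 1e-12, 1.0)
    q = np.clip(np.asarray(q, dtype=float), 1e-12, 1.0)
    return float(np.sum(p * np.log(p / q)))
```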
arXiv Detail & Related papers (2020-03-31T06:03:51Z)
- Ambiguity in Sequential Data: Predicting Uncertain Futures with Recurrent Models [110.82452096672182]
We propose an extension of the Multiple Hypothesis Prediction (MHP) model to handle ambiguous predictions with sequential data.
We also introduce a novel metric for ambiguous problems, which is better suited to account for uncertainties.
arXiv Detail & Related papers (2020-03-10T09:15:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.