A Tale Of Two Long Tails
- URL: http://arxiv.org/abs/2107.13098v1
- Date: Tue, 27 Jul 2021 22:49:59 GMT
- Title: A Tale Of Two Long Tails
- Authors: Daniel D'souza, Zach Nussbaum, Chirag Agarwal, Sara Hooker
- Abstract summary: We identify examples the model is uncertain about and characterize the source of said uncertainty.
We investigate whether the rate of learning in the presence of additional information differs between atypical and noisy examples.
Our results show that well-designed interventions over the course of training can be an effective way to characterize and distinguish between different sources of uncertainty.
- Score: 4.970364068620608
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As machine learning models are increasingly employed to assist human
decision-makers, it becomes critical to communicate the uncertainty associated
with these model predictions. However, the majority of work on uncertainty has
focused on traditional probabilistic or ranking approaches - where the model
assigns low probabilities or scores to uncertain examples. While this captures
what examples are challenging for the model, it does not capture the underlying
source of the uncertainty. In this work, we seek to identify examples the model
is uncertain about and characterize the source of said uncertainty. We explore
the benefits of designing a targeted intervention - targeted data augmentation
of the examples where the model is uncertain over the course of training. We
investigate whether the rate of learning in the presence of additional
information differs between atypical and noisy examples. Our results show that
this is indeed the case, suggesting that well-designed interventions over the
course of training can be an effective way to characterize and distinguish
between different sources of uncertainty.
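As a concrete illustration of the intervention the abstract describes, below is a minimal PyTorch-style sketch: extra augmentation is applied only to examples flagged as uncertain, and per-example correctness is logged each epoch so the rate of learning of atypical versus noisy subsets can be compared afterwards. The flagging policy, the `augment` transform, and a loader that yields example indices are illustrative assumptions, not the paper's exact protocol.

```python
import torch
import torch.nn.functional as F

def targeted_intervention_epoch(model, optimizer, loader, augment, uncertain_ids):
    """One epoch of training in which only examples currently flagged as
    uncertain receive extra data augmentation (illustrative policy)."""
    learned = {}  # example id -> predicted correctly this epoch?
    model.train()
    for xb, yb, idx in loader:  # loader is assumed to yield example ids
        # Targeted intervention: augment the uncertain examples only.
        mask = torch.tensor([int(i) in uncertain_ids for i in idx])
        if mask.any():
            xb = xb.clone()
            xb[mask] = augment(xb[mask])
        logits = model(xb)
        loss = F.cross_entropy(logits, yb)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Log per-example correctness; how quickly these flip to True over
        # epochs is the "rate of learning" used to separate atypical
        # (slow but learnable) from noisy (persistently unlearned) examples.
        preds = logits.argmax(dim=1)
        for i, correct in zip(idx.tolist(), (preds == yb).tolist()):
            learned[i] = correct
    return learned
```

Comparing the epoch at which each example is first consistently predicted correctly, with and without the intervention, gives the kind of learning-rate signal the abstract refers to.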
Related papers
- An Ambiguity Measure for Recognizing the Unknowns in Deep Learning [0.0]
We study deep neural networks in terms of the scope of the data they are trained on.
We propose a measure for quantifying the ambiguity of inputs for any given model.
arXiv Detail & Related papers (2023-12-11T02:57:12Z)
- Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty accounting for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
arXiv Detail & Related papers (2023-08-03T12:43:21Z)
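The summary above does not specify the sampling mechanism; one familiar way to realize inference-time sampling is Monte Carlo dropout, sketched below under that assumption. Keeping dropout active at test time yields multiple plausible outputs per input, and their spread serves as an uncertainty score.

```python
import torch

@torch.no_grad()
def sample_predictions(model, x, n_samples=30):
    """Monte Carlo dropout: several stochastic forward passes per input.
    Note: model.train() also unfreezes batch-norm statistics; a careful
    implementation would enable only the dropout layers."""
    model.train()
    samples = torch.stack([model(x).softmax(dim=-1) for _ in range(n_samples)])
    mean = samples.mean(dim=0)               # averaged predictive distribution
    spread = samples.var(dim=0).sum(dim=-1)  # disagreement across samples
    return mean, spread
```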
- A Data-Driven Measure of Relative Uncertainty for Misclassification Detection [25.947610541430013]
We introduce a data-driven measure of uncertainty relative to an observer for misclassification detection.
By learning patterns in the distribution of soft-predictions, our uncertainty measure can identify misclassified samples.
We demonstrate empirical improvements over multiple image classification tasks, outperforming state-of-the-art misclassification detection methods.
arXiv Detail & Related papers (2023-06-02T17:32:03Z)
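In the same spirit as the entry above, though not the paper's exact measure, one can learn patterns in soft predictions that indicate errors: fit a small detector on held-out softmax vectors labeled by whether the classifier was wrong. The `soft_preds` and `was_wrong` arrays are assumed to come from a held-out split.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_error_detector(soft_preds, was_wrong):
    """soft_preds: (n, k) softmax outputs on held-out data;
    was_wrong: (n,) 1 where the classifier misclassified the example."""
    feats = np.sort(soft_preds, axis=1)[:, ::-1]  # sorted probs: class-agnostic
    return LogisticRegression(max_iter=1000).fit(feats, was_wrong)

def relative_uncertainty(detector, soft_preds):
    feats = np.sort(soft_preds, axis=1)[:, ::-1]
    return detector.predict_proba(feats)[:, 1]  # higher = more likely an error
```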
- In Search of Insights, Not Magic Bullets: Towards Demystification of the Model Selection Dilemma in Heterogeneous Treatment Effect Estimation [92.51773744318119]
This paper empirically investigates the strengths and weaknesses of different model selection criteria.
We highlight that there is a complex interplay between selection strategies, candidate estimators and the data used for comparing them.
arXiv Detail & Related papers (2023-02-06T16:55:37Z)
- Cross-model Fairness: Empirical Study of Fairness and Ethics Under Model Multiplicity [10.144058870887061]
We argue that individuals can be harmed when one predictor is chosen ad hoc from a group of equally well-performing models.
Our findings suggest that such unfairness can be readily found in real life and it may be difficult to mitigate by technical means alone.
arXiv Detail & Related papers (2022-03-14T14:33:39Z)
- Uncertainty estimation under model misspecification in neural network regression [3.2622301272834524]
We study the effect of the model choice on uncertainty estimation.
We highlight that under model misspecification, aleatoric uncertainty is not properly captured.
arXiv Detail & Related papers (2021-11-23T10:18:41Z)
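For context on the failure mode described above: aleatoric uncertainty in neural regression is usually modeled with a heteroscedastic Gaussian likelihood, a minimal version of which is sketched below; under model misspecification, the learned variance can absorb error that is not actually data noise.

```python
import torch

def heteroscedastic_nll(mean, log_var, y):
    """Gaussian negative log-likelihood with a per-input variance head.
    exp(log_var) is the network's estimate of aleatoric (data) noise."""
    return 0.5 * (log_var + torch.exp(-log_var) * (y - mean) ** 2).mean()
```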
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
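A rough sketch of the train-time sampling-and-selection idea, under the assumption that the "oracle" is approximated by whichever of several sampled predictions lies closest to the ground truth; the paper's conditional latent variable model is considerably more involved.

```python
import torch

def oracle_selection_loss(predictions, target, base_loss):
    """predictions: list of sampled outputs for the same input; base_loss
    returns a scalar per prediction. Train on the member closest to the
    target (a stand-in for the unbiased oracle); the spread of the members
    estimates aleatoric uncertainty without penalizing diversity."""
    losses = torch.stack([base_loss(p, target) for p in predictions])
    aleatoric = torch.stack(predictions).var(dim=0)
    return losses.min(), aleatoric
```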
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
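A minimal sketch of the kind of objective the summary describes: latent perturbations are pushed to flip the classifier's prediction while a diversity term keeps the candidate explanations distinct. The tensor shapes and the cosine-based diversity penalty here are assumptions for illustration, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def diverse_counterfactual_loss(cf_logits, target_class, deltas):
    """cf_logits: (m, k) classifier outputs on m perturbed inputs;
    deltas: (m, d) latent perturbations, one per candidate explanation."""
    targets = torch.full((cf_logits.size(0),), target_class, dtype=torch.long)
    flip = F.cross_entropy(cf_logits, targets)      # change the prediction
    sims = F.cosine_similarity(deltas.unsqueeze(1), deltas.unsqueeze(0), dim=-1)
    off_diag = sims - torch.eye(deltas.size(0))     # ignore self-similarity
    diversity = off_diag.abs().mean()               # keep explanations apart
    return flip + diversity
```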
- Learning from others' mistakes: Avoiding dataset biases without modeling them [111.17078939377313]
State-of-the-art natural language processing (NLP) models often learn to model dataset biases and surface form correlations instead of features that target the intended task.
Previous work has demonstrated effective methods to circumvent these issues when knowledge of the bias is available.
We show a method for training models that learn to ignore these problematic correlations.
arXiv Detail & Related papers (2020-12-02T16:10:54Z)
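The method summarized above builds on a product-of-experts formulation: a weak learner picks up the dataset's shortcuts, and the main model is trained through a log-space combination that discounts examples the weak model already handles. A minimal sketch, assuming the weak model was trained beforehand and is kept frozen:

```python
import torch.nn.functional as F

def product_of_experts_loss(main_logits, weak_logits, labels):
    """Combine main and weak (bias-prone) models in log space; gradients
    to the main model are damped where the weak model is already
    confidently correct, so it learns the harder, task-relevant signal."""
    combined = F.log_softmax(main_logits, dim=-1) + \
               F.log_softmax(weak_logits.detach(), dim=-1)
    return F.cross_entropy(combined, labels)
```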
- Accurate and Robust Feature Importance Estimation under Distribution Shifts [49.58991359544005]
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z)
- Identifying Causal-Effect Inference Failure with Uncertainty-Aware Models [41.53326337725239]
We introduce a practical approach for integrating uncertainty estimation into a class of state-of-the-art neural network methods.
We show that our methods enable us to deal gracefully with situations of "no-overlap", common in high-dimensional data.
We show that correctly modeling uncertainty can keep us from giving overconfident and potentially harmful recommendations.
arXiv Detail & Related papers (2020-07-01T00:37:41Z)
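One way to operationalize "dealing gracefully with no-overlap", sketched here with a per-arm ensemble rather than the paper's specific architecture: estimate the treatment effect with separate ensembles for treated and control outcomes, and withhold a recommendation when they disagree, since disagreement is a symptom of covariate regions where one arm has little or no data. The threshold `max_std` is an illustrative knob.

```python
import numpy as np

def effect_with_abstention(models_control, models_treated, x, max_std=0.5):
    """Treatment-effect estimate that abstains under high uncertainty.
    models_*: fitted regressors with a scikit-learn style .predict(x)."""
    y0 = np.stack([m.predict(x) for m in models_control])  # (n_models, n)
    y1 = np.stack([m.predict(x) for m in models_treated])
    effect = y1.mean(axis=0) - y0.mean(axis=0)
    spread = np.sqrt(y0.var(axis=0) + y1.var(axis=0))
    # NaN marks inputs where recommending a treatment would be overconfident.
    return np.where(spread > max_std, np.nan, effect), spread
```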
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.