Understanding out-of-distribution accuracies through quantifying
difficulty of test samples
- URL: http://arxiv.org/abs/2203.15100v1
- Date: Mon, 28 Mar 2022 21:13:41 GMT
- Title: Understanding out-of-distribution accuracies through quantifying
difficulty of test samples
- Authors: Berfin Simsek, Melissa Hall, Levent Sagun
- Abstract summary: Existing works show that although modern neural networks achieve remarkable generalization performance on the in-distribution (ID) dataset, the accuracy drops significantly on out-of-distribution (OOD) datasets.
We propose a new metric that quantifies the difficulty of test images (either ID or OOD) and depends on the interaction between the training dataset and the model.
- Score: 10.266928164137635
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing works show that although modern neural networks achieve remarkable
generalization performance on the in-distribution (ID) dataset, the accuracy
drops significantly on out-of-distribution (OOD) datasets
(Recht et al., 2018, 2019). To understand why a variety of models
consistently make more mistakes on OOD datasets, we propose a new metric that
quantifies the difficulty of a test image (either ID or OOD) and depends on
the interaction of the training dataset and the model. In particular, we
introduce the "confusion score" as a label-free measure of image difficulty,
which quantifies the amount of disagreement on a given test image based on the
class-conditional probabilities estimated by an ensemble of trained models.
Using the confusion score, we investigate CIFAR-10 and its OOD derivatives.
Next, by partitioning the test and OOD datasets via their confusion scores, we
predict the relationship between ID and OOD accuracies for various
architectures. This allows us to obtain an estimator of the OOD accuracy of a
given model using only ID test labels. Our observations indicate that the
biggest contribution to the accuracy drop comes from images with high confusion
scores. Upon further inspection, we report on the nature of the misclassified
images grouped by their confusion scores: (i) images with high
confusion scores contain "weak spurious correlations" that appear in
multiple classes in the training data and lack clear "class-specific
features", and (ii) images with low confusion scores exhibit spurious
correlations that belong to another class, namely "class-specific
spurious correlations".
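To make the two steps in the abstract concrete, a minimal NumPy sketch follows. The abstract does not give the exact formula for the confusion score, so the disagreement measure below (one minus the maximum ensemble-averaged class probability), the quantile binning, and all function names and array shapes are illustrative assumptions rather than the paper's definitions.

```python
import numpy as np

def confusion_scores(ensemble_probs: np.ndarray) -> np.ndarray:
    """Label-free difficulty of each test image from an ensemble's outputs.

    ensemble_probs: shape (n_models, n_images, n_classes), softmax outputs
    of independently trained models.

    Assumed disagreement proxy (not the paper's exact formula): one minus
    the maximum of the ensemble-averaged class probabilities, i.e. how far
    the ensemble is from a confident consensus on the image.
    """
    mean_probs = ensemble_probs.mean(axis=0)   # (n_images, n_classes)
    return 1.0 - mean_probs.max(axis=1)        # (n_images,)

def ood_accuracy_estimate(id_scores: np.ndarray, id_correct: np.ndarray,
                          ood_scores: np.ndarray, n_bins: int = 10) -> float:
    """Estimate a model's OOD accuracy using only ID test labels.

    Partition both datasets into confusion-score bins, measure per-bin
    accuracy on ID data (id_correct is a boolean array), and reweight by
    how often each bin occurs in the OOD set.
    """
    edges = np.quantile(id_scores, np.linspace(0.0, 1.0, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf      # catch out-of-range OOD scores
    id_bins = np.digitize(id_scores, edges[1:-1])
    ood_bins = np.digitize(ood_scores, edges[1:-1])

    estimate = 0.0
    for b in range(n_bins):
        weight = np.mean(ood_bins == b)        # OOD occupancy of bin b
        in_bin = id_bins == b
        if weight > 0 and in_bin.any():
            estimate += weight * id_correct[in_bin].mean()
    return estimate
```

Under these assumptions, the estimator reweights per-bin ID accuracies by the OOD bins' occupancies, which is the sense in which a model's OOD accuracy can be predicted from ID test labels alone.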
Related papers
- Decompose-and-Compose: A Compositional Approach to Mitigating Spurious Correlation [2.273629240935727]
We propose Decompose-and-Compose (DaC) to improve robustness to correlation shift by combining elements of images.
Based on our observations, models trained with Empirical Risk Minimization (ERM) usually attend strongly to either the causal components or the components having a high spurious correlation with the label.
We propose a group-balancing method by intervening on images without requiring group labels or information regarding the spurious features during training.
arXiv Detail & Related papers (2024-02-29T07:24:24Z) - Common-Sense Bias Discovery and Mitigation for Classification Tasks [16.8259488742528]
We propose a framework to extract feature clusters in a dataset based on image descriptions.
The analyzed features and correlations are human-interpretable, so we name the method Common-Sense Bias Discovery (CSBD).
Experiments show that our method discovers novel biases on multiple classification tasks for two benchmark image datasets.
arXiv Detail & Related papers (2024-01-24T03:56:07Z) - Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z) - EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]
This paper addresses the challenging task of long-tailed OOD detection.
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes.
We propose two simple ideas: (1) Expanding the in-distribution class space by introducing multiple abstention classes, and (2) Augmenting the context-limited tail classes by overlaying images onto the context-rich OOD data.
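As a rough illustration of idea (2), the sketch below composites a context-limited tail-class image onto a context-rich OOD image; the plain alpha blending and the function name are assumptions for illustration, not the paper's exact augmentation.

```python
import numpy as np

def overlay_on_ood(tail_img: np.ndarray, ood_img: np.ndarray,
                   alpha: float = 0.8) -> np.ndarray:
    """Blend a tail-class image onto an OOD background image.

    Both inputs are float arrays in [0, 1] with shape (H, W, C); the result
    keeps the tail-class label, while the OOD image contributes the varied
    context the tail class lacks. `alpha` sets how strongly the tail-class
    content dominates.
    """
    assert tail_img.shape == ood_img.shape
    return alpha * tail_img + (1.0 - alpha) * ood_img
```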
arXiv Detail & Related papers (2023-12-14T13:47:13Z) - Estimating label quality and errors in semantic segmentation data via
any model [19.84626033109009]
We study methods to score label quality, such that the images with the lowest scores are least likely to be correctly labeled.
This helps prioritize what data to review in order to ensure a high-quality training/evaluation dataset.
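One simple way to realize such a score (a sketch, not necessarily the paper's estimator) is per-pixel self-confidence: the probability the model assigns to the annotated class, averaged over each image so that low-scoring images are surfaced for review first.

```python
import numpy as np

def label_quality_scores(pred_probs: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Per-image label-quality scores for semantic segmentation data.

    pred_probs: shape (n_images, H, W, n_classes), softmax outputs from any
    trained segmentation model.
    labels:     shape (n_images, H, W), annotated integer class ids.

    Score = mean over pixels of the probability assigned to the annotated
    class; the lowest-scoring images are the best candidates for review.
    """
    pixel_conf = np.take_along_axis(
        pred_probs, labels[..., None], axis=-1
    ).squeeze(-1)                               # (n_images, H, W)
    return pixel_conf.mean(axis=(1, 2))         # (n_images,)
```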
arXiv Detail & Related papers (2023-07-11T07:29:09Z) - Spawrious: A Benchmark for Fine Control of Spurious Correlation Biases [8.455991178281469]
We present benchmark-{O2O, M2M}-{Easy, Medium, Hard}, an image classification benchmark suite containing spurious correlations between classes and backgrounds.
The resulting dataset is of high quality and contains approximately 152k images.
arXiv Detail & Related papers (2023-03-09T18:22:12Z) - Masked Images Are Counterfactual Samples for Robust Fine-tuning [77.82348472169335]
Fine-tuning deep learning models can lead to a trade-off between in-distribution (ID) performance and out-of-distribution (OOD) robustness.
We propose a novel fine-tuning method, which uses masked images as counterfactual samples that help improve the robustness of the fine-tuned model.
arXiv Detail & Related papers (2023-03-06T11:51:28Z) - Incorporating Semi-Supervised and Positive-Unlabeled Learning for
Boosting Full Reference Image Quality Assessment [73.61888777504377]
Full-reference (FR) image quality assessment (IQA) evaluates the visual quality of a distorted image by measuring its perceptual difference from a pristine-quality reference.
Unlabeled data can be easily collected from an image degradation or restoration process, making it attractive to exploit unlabeled training data to boost FR-IQA performance.
In this paper, we propose incorporating semi-supervised and positive-unlabeled (PU) learning to exploit unlabeled data while mitigating the adverse effect of outliers.
arXiv Detail & Related papers (2022-04-19T09:10:06Z) - General Greedy De-bias Learning [163.65789778416172]
We propose a General Greedy De-bias learning framework (GGD), which greedily trains the biased models and the base model, in a manner analogous to gradient descent in functional space.
GGD can learn a more robust base model under both settings: task-specific biased models with prior knowledge and self-ensemble biased models without prior knowledge.
arXiv Detail & Related papers (2021-12-20T14:47:32Z) - Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve an auxiliary problem, yielding the CONTRastive Image QUality Evaluator (CONTRIQUE).
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
arXiv Detail & Related papers (2021-10-25T21:01:00Z) - Overinterpretation reveals image classification model pathologies [15.950659318117694]
Convolutional neural networks (CNNs) trained on popular benchmarks exhibit troubling pathologies that allow them to display high accuracy even in the absence of semantically salient features.
We demonstrate that neural networks trained on CIFAR-10 and ImageNet suffer from overinterpretation.
Although these patterns portend potential model fragility in real-world deployment, they are in fact valid statistical patterns of the benchmark that alone suffice to attain high test accuracy.
arXiv Detail & Related papers (2020-03-19T17:12:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.