ImageNet-Hard: The Hardest Images Remaining from a Study of the Power of
Zoom and Spatial Biases in Image Classification
- URL: http://arxiv.org/abs/2304.05538v4
- Date: Sun, 8 Oct 2023 23:03:33 GMT
- Title: ImageNet-Hard: The Hardest Images Remaining from a Study of the Power of
Zoom and Spatial Biases in Image Classification
- Authors: Mohammad Reza Taesiri, Giang Nguyen, Sarra Habchi, Cor-Paul Bezemer,
Anh Nguyen
- Abstract summary: We show that proper framing of the input image can lead to the correct classification of 98.91% of ImageNet images.
We propose a test-time augmentation (TTA) technique that improves classification accuracy by forcing models to explicitly perform zoom-in operations.
- Score: 9.779748872936912
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image classifiers are information-discarding machines, by design. Yet, how
these models discard information remains mysterious. We hypothesize that one
way for image classifiers to reach high accuracy is to first zoom to the most
discriminative region in the image and then extract features from there to
predict image labels, discarding the rest of the image. Studying six popular
networks ranging from AlexNet to CLIP, we find that proper framing of the input
image can lead to the correct classification of 98.91% of ImageNet images.
Furthermore, we uncover positional biases in various datasets, especially a
strong center bias in two popular datasets: ImageNet-A and ObjectNet. Finally,
leveraging our insights into the potential of zooming, we propose a test-time
augmentation (TTA) technique that improves classification accuracy by forcing
models to explicitly perform zoom-in operations before making predictions. Our
method is more interpretable, accurate, and faster than MEMO, a
state-of-the-art (SOTA) TTA method. We introduce ImageNet-Hard, a new benchmark
that challenges SOTA classifiers including large vision-language models even
when optimal zooming is allowed.
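As a rough illustration of the zoom-based TTA idea described in the abstract, one can classify several zoomed-in crops and average the predictions. A minimal sketch assuming a PyTorch classifier; the crop lattice, zoom factors, and averaging rule are illustrative choices, not the authors' exact procedure:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def zoom_tta(model, image, zoom_factors=(0.8, 0.6, 0.4), grid=2):
    """Average predictions over zoomed-in crops of `image`.

    image: (C, H, W) float tensor; model maps (N, C, H, W) to logits.
    Crops of size z*H x z*W are taken on a `grid` x `grid` lattice,
    resized back to H x W, and their softmax outputs are averaged.
    """
    _, h, w = image.shape
    crops = [image.unsqueeze(0)]  # keep the full frame as one view
    for z in zoom_factors:
        ch, cw = int(h * z), int(w * z)
        for i in range(grid):
            for j in range(grid):
                top = i * (h - ch) // max(grid - 1, 1)
                left = j * (w - cw) // max(grid - 1, 1)
                crop = image[:, top:top + ch, left:left + cw].unsqueeze(0)
                crops.append(F.interpolate(crop, size=(h, w),
                                           mode='bilinear',
                                           align_corners=False))
    probs = F.softmax(model(torch.cat(crops, dim=0)), dim=1)
    return probs.mean(dim=0)  # aggregated class distribution
```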
Related papers
- xT: Nested Tokenization for Larger Context in Large Images [79.37673340393475]
xT is a framework for vision transformers which aggregates global context with local details.
We are able to increase accuracy by up to 8.6% on challenging classification tasks.
arXiv Detail & Related papers (2024-03-04T10:29:58Z)
- Fine-grained Recognition with Learnable Semantic Data Augmentation [68.48892326854494]
Fine-grained image recognition is a longstanding computer vision challenge.
We propose diversifying the training data at the feature-level to alleviate the discriminative region loss problem.
Our method significantly improves the generalization performance on several popular classification networks.
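Feature-level diversification of this kind is commonly realized by perturbing deep features along class-conditional semantic directions. A minimal sketch assuming diagonal per-class covariance estimates; it illustrates the general idea, not the paper's exact method:

```python
import torch

def semantic_feature_augment(features, labels, class_var, strength=0.5):
    """Perturb each feature vector with Gaussian noise drawn from its
    class's covariance, so augmentation follows semantic directions.

    features:  (N, D) deep features from the backbone
    labels:    (N,) integer class labels
    class_var: (num_classes, D) per-class feature variances (a diagonal
               covariance estimate, typically updated online)
    """
    var = class_var[labels]                        # (N, D), per sample
    noise = torch.randn_like(features) * var.sqrt()
    return features + strength * noise
```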
arXiv Detail & Related papers (2023-09-01T11:15:50Z)
- Improving Zero-shot Generalization and Robustness of Multi-modal Models [70.14692320804178]
Multi-modal image-text models such as CLIP and LiT have demonstrated impressive performance on image classification benchmarks.
We investigate the reasons for this performance gap and find that many of the failure cases are caused by ambiguity in the text prompts.
We propose a simple and efficient way to improve accuracy on such uncertain images by making use of the WordNet hierarchy.
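One simple way to exploit the WordNet hierarchy, sketched below, is to append a class name's hypernym to the prompt to resolve ambiguity. The prompt template and first-synset choice are illustrative assumptions (a real method would resolve word sense more carefully):

```python
from nltk.corpus import wordnet as wn  # requires nltk.download('wordnet')

def hierarchy_prompt(class_name):
    """Disambiguate a class name by appending its WordNet hypernym,
    e.g. 'a photo of a kite, a type of hawk' vs. the toy."""
    synsets = wn.synsets(class_name.replace(' ', '_'))
    if not synsets or not synsets[0].hypernyms():
        return f"a photo of a {class_name}"
    parent = synsets[0].hypernyms()[0].lemma_names()[0].replace('_', ' ')
    return f"a photo of a {class_name}, a type of {parent}"
```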
arXiv Detail & Related papers (2022-12-04T07:26:24Z)
- Robustifying Deep Vision Models Through Shape Sensitization [19.118696557797957]
We propose a simple, lightweight adversarial augmentation technique that explicitly incentivizes the network to learn holistic shapes.
Our augmentations superpose edgemaps from one image onto another image with shuffled patches, using a randomly determined mixing proportion.
We show that our augmentations significantly improve classification accuracy and robustness measures on a range of datasets and neural architectures.
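The described augmentation can be approximated as follows; the patch size, Canny thresholds, and mixing range are illustrative assumptions, not the paper's settings:

```python
import numpy as np
import cv2

def shape_sensitizing_augment(img_a, img_b, patch=32):
    """Superpose the edge map of img_a onto a patch-shuffled copy of
    img_b with a random mixing weight, so only the holistic shape of
    img_a survives. Inputs: (H, W, 3) uint8 arrays of equal size,
    with H and W divisible by `patch`."""
    h, w, _ = img_b.shape
    assert h % patch == 0 and w % patch == 0
    # Patch-shuffle img_b to destroy its global shape cues.
    tiles = [img_b[i:i + patch, j:j + patch]
             for i in range(0, h, patch) for j in range(0, w, patch)]
    np.random.shuffle(tiles)
    shuffled = np.zeros_like(img_b)
    k = 0
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            shuffled[i:i + patch, j:j + patch] = tiles[k]
            k += 1
    # The edge map of img_a carries its shape information.
    edges = cv2.Canny(cv2.cvtColor(img_a, cv2.COLOR_RGB2GRAY), 100, 200)
    edges = cv2.cvtColor(edges, cv2.COLOR_GRAY2RGB)
    lam = np.random.uniform(0.2, 0.8)  # random mixing proportion
    return (lam * edges + (1 - lam) * shuffled).astype(np.uint8)
```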
arXiv Detail & Related papers (2022-11-14T11:17:46Z)
- Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision [38.22842778742829]
Discriminative self-supervised learning allows training models on any random group of internet images.
We train models on billions of random images without any data pre-processing or prior assumptions about what we want the model to learn.
We extensively study and validate our model performance on over 50 benchmarks, including fairness, robustness to distribution shift, geographical diversity, fine-grained recognition, image copy detection, and many image classification datasets.
arXiv Detail & Related papers (2022-02-16T22:26:47Z)
- Calibrating Class Activation Maps for Long-Tailed Visual Recognition [60.77124328049557]
We present two effective modifications of CNNs to improve network learning from long-tailed distribution.
First, we present a Class Activation Map Calibration (CAMC) module to improve the learning and prediction of network classifiers.
Second, we investigate the use of normalized classifiers for representation learning in long-tailed problems.
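A normalized (cosine) classifier of the kind mentioned above can be sketched as follows; the scale hyperparameter is an illustrative assumption:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormalizedClassifier(nn.Module):
    """Cosine-similarity classifier: features and class weights are
    both L2-normalized, so head classes cannot dominate via large
    weight norms -- a common choice in long-tailed recognition."""
    def __init__(self, feat_dim, num_classes, scale=16.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.scale = scale

    def forward(self, features):
        w = F.normalize(self.weight, dim=1)
        f = F.normalize(features, dim=1)
        return self.scale * f @ w.t()  # (N, num_classes) logits
```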
arXiv Detail & Related papers (2021-08-29T05:45:03Z)
- Reconciliation of Statistical and Spatial Sparsity For Robust Image and Image-Set Classification [27.319334479994787]
We propose a novel Joint Statistical and Spatial Sparse representation, dubbed J3S, to model the image or image-set data for classification.
We propose to solve the joint sparse coding problem based on the J3S model, by coupling the local and global image representations using joint sparsity.
Experiments show that the proposed J3S-based image classification scheme outperforms the popular or state-of-the-art competing methods over FMD, UIUC, ETH-80 and YTC databases.
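A generic joint-sparsity objective of the kind the summary describes couples local and global codes through a shared row-sparse support. This is an illustrative formulation, not the paper's exact J3S model:

```latex
\min_{A_\ell,\, a_g}\;
\underbrace{\lVert X_\ell - D_\ell A_\ell \rVert_F^2}_{\text{local fit}}
+ \underbrace{\lVert x_g - D_g a_g \rVert_2^2}_{\text{global fit}}
+ \lambda \, \lVert [\, A_\ell \;\; a_g \,] \rVert_{2,1}
```

Here the $\ell_{2,1}$ penalty (the sum of row-wise $\ell_2$ norms of the concatenated coefficients) pushes the local and global representations to select the same dictionary atoms, which is what "coupling the local and global image representations using joint sparsity" amounts to.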
arXiv Detail & Related papers (2021-06-01T06:33:24Z)
- CutPaste: Self-Supervised Learning for Anomaly Detection and Localization [59.719925639875036]
We propose a framework for building anomaly detectors using normal training data only.
We first learn self-supervised deep representations and then build a generative one-class classifier on learned representations.
Our empirical study on MVTec anomaly detection dataset demonstrates the proposed algorithm is general to be able to detect various types of real-world defects.
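The core CutPaste augmentation can be sketched as follows; the patch-size ranges are illustrative assumptions:

```python
import random
from PIL import Image

def cutpaste(image):
    """CutPaste-style augmentation: cut a random rectangular patch and
    paste it back at a different location. A classifier trained to
    spot such pasted patches learns representations that transfer to
    detecting real defects."""
    w, h = image.size
    pw = random.randint(w // 10, w // 4)
    ph = random.randint(h // 10, h // 4)
    # Source location of the patch ...
    x1, y1 = random.randint(0, w - pw), random.randint(0, h - ph)
    patch = image.crop((x1, y1, x1 + pw, y1 + ph))
    # ... pasted at a different random location.
    x2, y2 = random.randint(0, w - pw), random.randint(0, h - ph)
    out = image.copy()
    out.paste(patch, (x2, y2))
    return out
```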
arXiv Detail & Related papers (2021-04-08T19:04:55Z)
- Rethinking Natural Adversarial Examples for Classification Models [43.87819913022369]
ImageNet-A is a famous dataset of natural adversarial examples.
We validate the hypothesis that backgrounds, rather than the objects themselves, drive many of these failures by reducing the background influence in ImageNet-A examples with object detection techniques.
Experiments showed that the object detection models with various classification models as backbones obtained much higher accuracy than their corresponding classification models.
arXiv Detail & Related papers (2021-02-23T14:46:48Z)
- High-Performance Large-Scale Image Recognition Without Normalization [34.58818094675353]
Batch normalization is a key component of most image classification models, but it has many undesirable properties.
We develop an adaptive gradient clipping technique which overcomes these instabilities, and design a significantly improved class of Normalizer-Free ResNets.
Our models attain significantly better performance than their batch-normalized counterparts when finetuning on ImageNet after large-scale pre-training.
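Adaptive gradient clipping rescales a gradient when its norm grows large relative to the corresponding parameter's norm, rather than against a fixed global threshold. A simplified per-tensor sketch (the paper clips unit-wise, per output channel, and the constants here are illustrative):

```python
import torch

def adaptive_gradient_clip(parameters, clip=0.01, eps=1e-3):
    """Clip each parameter's gradient when the ratio of gradient norm
    to parameter norm exceeds `clip`, scaling it back to the limit."""
    for p in parameters:
        if p.grad is None:
            continue
        p_norm = p.detach().norm().clamp_min(eps)
        g_norm = p.grad.detach().norm()
        max_norm = clip * p_norm
        if g_norm > max_norm:
            p.grad.mul_(max_norm / g_norm.clamp_min(1e-6))
```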
arXiv Detail & Related papers (2021-02-11T18:23:20Z)
- I Am Going MAD: Maximum Discrepancy Competition for Comparing Classifiers Adaptively [135.7695909882746]
We introduce the MAximum Discrepancy (MAD) competition.
We adaptively sample a small test set from an arbitrarily large corpus of unlabeled images.
Human labeling on the resulting model-dependent image sets reveals the relative performance of the competing classifiers.
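Adaptive sampling by maximum discrepancy can be sketched as follows; the disagreement score is an illustrative choice, not the paper's exact formulation:

```python
import numpy as np

def mad_sample(probs_a, probs_b, k=30):
    """Pick the k unlabeled images on which two classifiers disagree
    most: different top-1 predictions, ranked by the product of both
    models' confidences. Human labels on this set then reveal which
    model is right more often.

    probs_a, probs_b: (N, num_classes) predicted distributions."""
    pred_a, pred_b = probs_a.argmax(1), probs_b.argmax(1)
    conf_a, conf_b = probs_a.max(1), probs_b.max(1)
    # Score is zero when the models agree, otherwise the product of
    # their confidences in the two conflicting labels.
    score = np.where(pred_a != pred_b, conf_a * conf_b, 0.0)
    return np.argsort(-score)[:k]  # indices of the top-k images
```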
arXiv Detail & Related papers (2020-02-25T03:32:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.