Rethinking Natural Adversarial Examples for Classification Models
- URL: http://arxiv.org/abs/2102.11731v1
- Date: Tue, 23 Feb 2021 14:46:48 GMT
- Title: Rethinking Natural Adversarial Examples for Classification Models
- Authors: Xiao Li, Jianmin Li, Ting Dai, Jie Shi, Jun Zhu, Xiaolin Hu
- Abstract summary: ImageNet-A is a famous dataset of natural adversarial examples.
We validated the hypothesis by reducing the background influence in ImageNet-A examples with object detection techniques.
Experiments showed that the object detection models with various classification models as backbones obtained much higher accuracy than their corresponding classification models.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, it was found that many real-world examples without intentional
modifications can fool machine learning models, and such examples are called
"natural adversarial examples". ImageNet-A is a famous dataset of natural
adversarial examples. By analyzing this dataset, we hypothesized that a large,
cluttered, and/or unusual background is an important reason why the images in
this dataset are difficult to classify. We validated this hypothesis by
reducing the background influence in ImageNet-A examples with object detection
techniques. Experiments showed that the object detection models with various
classification models as backbones obtained much higher accuracy than their
corresponding classification models. A detection model based on the
classification model EfficientNet-B7 achieved a top-1 accuracy of 53.95%,
surpassing previous state-of-the-art classification models trained on ImageNet,
suggesting that accurate localization information can significantly boost the
performance of classification models on ImageNet-A. We then manually cropped
the objects in images from ImageNet-A and created a new dataset, named
ImageNet-A-Plus. A human test on the new dataset showed that the deep
learning-based classifiers still performed quite poorly compared with humans.
Therefore, the new dataset can be used to study the robustness of
classification models to the internal variance of objects without considering
the background disturbance.
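The background-reduction step described in the abstract can be sketched as a simple preprocessing stage: detect the object, crop the image to its bounding box (keeping a small margin of context), and classify the crop instead of the full scene. The snippet below is a minimal illustration of the cropping stage only; the function name, margin parameter, and box format are assumptions for illustration, not the paper's actual implementation, and in a real pipeline the box would come from an object detection model.

```python
import numpy as np

def crop_to_box(image, box, margin=0.1):
    """Crop an HxWxC image to a bounding box, expanded by a relative margin.

    box = (x1, y1, x2, y2) in pixel coordinates. The margin keeps a little
    context around the object while discarding most of the background,
    which is the effect the paper attributes its accuracy gains to.
    Note: this is an illustrative sketch, not the paper's code.
    """
    h, w = image.shape[:2]
    x1, y1, x2, y2 = box
    mx = (x2 - x1) * margin  # horizontal margin in pixels
    my = (y2 - y1) * margin  # vertical margin in pixels
    x1 = max(0, int(x1 - mx))
    y1 = max(0, int(y1 - my))
    x2 = min(w, int(x2 + mx))
    y2 = min(h, int(y2 + my))
    return image[y1:y2, x1:x2]

# Example: a 100x100 image whose "object" occupies a 40x40 region.
img = np.zeros((100, 100, 3), dtype=np.uint8)
crop = crop_to_box(img, (30, 30, 70, 70), margin=0.1)
print(crop.shape)  # (48, 48, 3)
```

After cropping, the patch would be resized to the classifier's input resolution and fed to the classification model (e.g., an EfficientNet-B7 backbone, as in the paper's detection-based setup).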
Related papers
- Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using language-guided generated counterfactual images.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
arXiv Detail & Related papers (2024-06-19T08:07:14Z)
- ImageNet-E: Benchmarking Neural Network Robustness via Attribute Editing [45.14977000707886]
Higher accuracy on ImageNet usually leads to better robustness against different corruptions.
We create a toolkit for object editing with control over backgrounds, sizes, positions, and directions.
We evaluate the performance of current deep learning models, including both convolutional neural networks and vision transformers.
arXiv Detail & Related papers (2023-03-30T02:02:32Z)
- Diverse, Difficult, and Odd Instances (D2O): A New Test Set for Object Classification [47.64219291655723]
We introduce a new test set, called D2O, which is sufficiently different from existing test sets.
Our dataset contains 8,060 images spread across 36 categories, out of which 29 appear in ImageNet.
The best Top-1 accuracy on our dataset is around 60%, which is much lower than the 91% best Top-1 accuracy on ImageNet.
arXiv Detail & Related papers (2023-01-29T19:58:32Z)
- Natural Adversarial Objects [10.940015831720144]
We introduce a new dataset, Natural Adversarial Objects (NAO), to evaluate the robustness of object detection models.
NAO contains 7,934 images and 9,943 objects that are unmodified and representative of real-world scenarios.
arXiv Detail & Related papers (2021-11-07T23:42:55Z)
- Salient Objects in Clutter [130.63976772770368]
This paper identifies and addresses a serious design bias of existing salient object detection (SOD) datasets.
This design bias has led to a saturation in performance for state-of-the-art SOD models when evaluated on existing datasets.
We propose a new high-quality dataset and update the previous saliency benchmark.
arXiv Detail & Related papers (2021-05-07T03:49:26Z)
- Contemplating real-world object classification [53.10151901863263]
We reanalyze the ObjectNet dataset recently proposed by Barbu et al. containing objects in daily life situations.
We find that applying deep models to the isolated objects, rather than the entire scene as is done in the original paper, results in around 20-30% performance improvement.
arXiv Detail & Related papers (2021-03-08T23:29:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.