ImageNet-E: Benchmarking Neural Network Robustness via Attribute Editing
- URL: http://arxiv.org/abs/2303.17096v1
- Date: Thu, 30 Mar 2023 02:02:32 GMT
- Title: ImageNet-E: Benchmarking Neural Network Robustness via Attribute Editing
- Authors: Xiaodan Li, Yuefeng Chen, Yao Zhu, Shuhui Wang, Rong Zhang, Hui Xue
- Abstract summary: Higher accuracy on ImageNet usually leads to better robustness against different corruptions.
We create a toolkit for object editing with controls of backgrounds, sizes, positions, and directions.
We evaluate the performance of current deep learning models, including both convolutional neural networks and vision transformers.
- Score: 45.14977000707886
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent studies have shown that higher accuracy on ImageNet usually leads to
better robustness against different corruptions. Therefore, in this paper,
instead of following the traditional research paradigm that investigates new
out-of-distribution corruptions or perturbations deep models may encounter, we
conduct model debugging on in-distribution data to explore which object
attributes a model may be sensitive to. To achieve this goal, we create a
toolkit for object editing with controls of backgrounds, sizes, positions, and
directions, and create a rigorous benchmark named ImageNet-E(diting) for
evaluating image classifier robustness in terms of object attributes. With
our ImageNet-E, we evaluate the performance of current deep learning models,
including both convolutional neural networks and vision transformers. We find
that most models are quite sensitive to attribute changes. A small change in
the background can lead to an average drop of 9.23% in top-1 accuracy. We also
evaluate some robust models including both adversarially trained models and
other robustly trained models and find that some models show worse robustness
against attribute changes than vanilla models. Based on these findings, we
discover ways to enhance attribute robustness with preprocessing, architecture
designs, and training strategies. We hope this work can provide some insights
to the community and open up a new avenue for research in robust computer
vision. The code and dataset are available at
https://github.com/alibaba/easyrobust.
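As a concrete illustration of the evaluation protocol the abstract describes, the sketch below compares a pretrained classifier's top-1 accuracy on original images against attribute-edited counterparts and reports the drop. It is a minimal sketch, not the authors' evaluation code: the folder layout and paths are hypothetical placeholders, and the real toolkit and data live in the easyrobust repository linked above.

```python
# Minimal sketch: measure the top-1 accuracy drop of a pretrained
# classifier when an object attribute (here, the background) is edited.
# Assumes ImageNet-E-style data laid out as two ImageFolder trees with
# identical class subfolders (so class indices line up with the model's
# ImageNet head); folder names below are hypothetical.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def top1_accuracy(model, root):
    loader = DataLoader(datasets.ImageFolder(root, preprocess),
                        batch_size=64, num_workers=4)
    correct = total = 0
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

model = models.resnet50(weights="IMAGENET1K_V2").eval()
acc_orig = top1_accuracy(model, "imagenet_e/original")    # hypothetical path
acc_edit = top1_accuracy(model, "imagenet_e/background")  # hypothetical path
print(f"top-1 drop under background edits: {100 * (acc_orig - acc_edit):.2f}%")
```

The same loop can be pointed at edited trees for size, position, or direction to profile which attribute a given model is most sensitive to.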
Related papers
- Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using counterfactual images generated under language guidance.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
arXiv Detail & Related papers (2024-06-19T08:07:14Z) - ImageNet-D: Benchmarking Neural Network Robustness on Diffusion Synthetic Object [78.58860252442045]
We introduce generative models as a data source for hard images that benchmark deep models' robustness.
We are able to generate images with more diversified backgrounds, textures, and materials than any prior work, where we term this benchmark as ImageNet-D.
Our work suggests that diffusion models can be an effective source to test vision models.
arXiv Detail & Related papers (2024-03-27T17:23:39Z) - ImageNet-X: Understanding Model Mistakes with Factor of Variation
Annotations [36.348968311668564]
We introduce ImageNet-X, a set of sixteen human annotations of factors such as pose, background, or lighting.
We investigate 2,200 current recognition models and study the types of mistakes as a function of model architecture.
We find models have consistent failure modes across ImageNet-X categories.
arXiv Detail & Related papers (2022-11-03T14:56:32Z) - Part-Based Models Improve Adversarial Robustness [57.699029966800644]
We show that combining human prior knowledge with end-to-end learning can improve the robustness of deep neural networks.
Our model combines a part segmentation model with a tiny classifier and is trained end-to-end to simultaneously segment objects into parts and classify them.
Our experiments indicate that these models also reduce texture bias and yield better robustness against common corruptions and spurious correlations.
arXiv Detail & Related papers (2022-09-15T15:41:47Z) - A Comprehensive Study of Image Classification Model Sensitivity to
Foregrounds, Backgrounds, and Visual Attributes [58.633364000258645]
We introduce the RIVAL10 dataset, consisting of roughly 26k instances over 10 classes.
We evaluate the sensitivity of a broad set of models to noise corruptions in foregrounds, backgrounds and attributes.
In our analysis, we consider diverse state-of-the-art architectures (ResNets, Transformers) and training procedures (CLIP, SimCLR, DeiT, adversarial training).
arXiv Detail & Related papers (2022-01-26T06:31:28Z) - Automated Cleanup of the ImageNet Dataset by Model Consensus,
Explainability and Confident Learning [0.0]
ImageNet has been the backbone of various convolutional neural networks (CNNs) trained on the ILSVRC12 subset.
This paper describes automated applications based on model consensus, explainability and confident learning to correct labeling mistakes.
The resulting ImageNet-Clean dataset improves model performance by 2-2.4% for SqueezeNet and EfficientNet-B0 models.
arXiv Detail & Related papers (2021-03-30T13:16:35Z) - Contemplating real-world object classification [53.10151901863263]
We reanalyze the ObjectNet dataset recently proposed by Barbu et al., which contains objects in daily-life situations.
We find that applying deep models to the isolated objects, rather than the entire scene as is done in the original paper, results in around 20-30% performance improvement.
arXiv Detail & Related papers (2021-03-08T23:29:59Z) - Rethinking Natural Adversarial Examples for Classification Models [43.87819913022369]
ImageNet-A is a famous dataset of natural adversarial examples.
We validated the hypothesis that background clutter drives many of these failures by reducing the background influence in ImageNet-A examples with object detection techniques.
Experiments showed that object detection models with various classification models as backbones obtained much higher accuracy than the corresponding classifiers alone (see the crop-then-classify sketch below).
arXiv Detail & Related papers (2021-02-23T14:46:48Z)