The Effects of Image Distribution and Task on Adversarial Robustness
- URL: http://arxiv.org/abs/2102.10534v1
- Date: Sun, 21 Feb 2021 07:15:50 GMT
- Title: The Effects of Image Distribution and Task on Adversarial Robustness
- Authors: Owen Kunhardt, Arturo Deza, Tomaso Poggio
- Abstract summary: We propose an adaptation to the area under the curve (AUC) metric to measure the adversarial robustness of a model.
We used this adversarial robustness metric on models trained on MNIST, CIFAR-10, and a Fusion dataset.
- Score: 4.597864989500202
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose an adaptation to the area under the curve (AUC)
metric to measure the adversarial robustness of a model over a particular
$\epsilon$-interval $[\epsilon_0, \epsilon_1]$ (interval of adversarial
perturbation strengths) that facilitates unbiased comparisons across models
when they have different initial $\epsilon_0$ performance. This can be used to
determine how adversarially robust a model is to different image distributions
or tasks (or some other variable), and/or to measure how robust a model is
compared with other models. We used this adversarial robustness metric on
models trained on MNIST, CIFAR-10, and a Fusion dataset (CIFAR-10 + MNIST), where
trained models performed either a digit or object recognition task using a
LeNet, ResNet50, or fully connected network (FullyConnectedNet) architecture,
and found the following: 1) CIFAR-10 models are inherently less adversarially
robust than MNIST models; 2) both the image distribution and task that a model
is trained on can affect the adversarial robustness of the resultant model; and
3) pretraining with a different image distribution and task sometimes carries over
the adversarial robustness induced by that image distribution and task in the
resultant model. Collectively, our results imply non-trivial differences in the
learned representation space of one perceptual system over another given its
exposure to different image statistics or tasks (mainly objects vs digits).
Moreover, these results hold even when model systems are equalized to have the
same level of performance, or when exposed to approximately matched image
statistics of fusion images but with different tasks.
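As a rough illustration of the proposed metric, the sketch below computes a normalized area under an accuracy-vs-$\epsilon$ curve over $[\epsilon_0, \epsilon_1]$. The accuracy values would come from evaluating the model under attacks of increasing strength (e.g., FGSM or PGD); the trapezoidal integration and the normalization by interval width and initial accuracy are illustrative assumptions, not the paper's exact formulation.
```python
import numpy as np

def robustness_auc(epsilons, accuracies, eps0, eps1):
    """Normalized AUC of accuracy over the interval [eps0, eps1] (assumed form)."""
    eps = np.asarray(epsilons, dtype=float)
    acc = np.asarray(accuracies, dtype=float)
    mask = (eps >= eps0) & (eps <= eps1)
    eps, acc = eps[mask], acc[mask]
    area = np.trapz(acc, eps)  # raw area under the accuracy curve
    # Normalize by interval width and by accuracy at eps0 so models with
    # different initial performance can be compared (assumed normalization).
    return area / ((eps1 - eps0) * acc[0])

# Illustrative accuracy curves for two hypothetical models.
eps_grid = [0.0, 0.1, 0.2, 0.3]
model_a  = [0.99, 0.80, 0.50, 0.20]  # high clean accuracy, fast decay
model_b  = [0.90, 0.85, 0.75, 0.60]  # lower start, slower decay
print(robustness_auc(eps_grid, model_a, 0.0, 0.3))  # ~0.64
print(robustness_auc(eps_grid, model_b, 0.0, 0.3))  # ~0.87
```
Under this normalization, model_b scores higher despite its lower clean accuracy, which is the kind of unbiased cross-model comparison the abstract describes.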
Related papers
- A Robust Adversarial Ensemble with Causal (Feature Interaction) Interpretations for Image Classification [9.945272787814941]
We present a deep ensemble model that combines discriminative features with generative models to achieve both high accuracy and adversarial robustness.
Our approach integrates a bottom-level pre-trained discriminative network for feature extraction with a top-level generative classification network that models adversarial input distributions.
arXiv Detail & Related papers (2024-12-28T05:06:20Z)
- Benchmarking Zero-Shot Robustness of Multimodal Foundation Models: A Pilot Study [61.65123150513683]
Multimodal foundation models, such as CLIP, produce state-of-the-art zero-shot results.
It is reported that these models close the robustness gap by matching the performance of supervised models trained on ImageNet.
We show that CLIP leads to a significant robustness drop compared to supervised ImageNet models on our benchmark (a minimal zero-shot CLIP setup is sketched after this list).
arXiv Detail & Related papers (2024-03-15T17:33:49Z)
- Image Similarity using An Ensemble of Context-Sensitive Models [2.9490616593440317]
We present a more intuitive approach to building and comparing image similarity models based on labelled data.
We address the challenges of sparse sampling in the image space (R, A, B) and biases in the models trained with context-based data.
Our testing results show that the constructed ensemble model performs 5% better than the best individual context-sensitive model.
arXiv Detail & Related papers (2024-01-15T20:23:05Z)
- Effective Robustness against Natural Distribution Shifts for Models with Different Training Data [113.21868839569]
"Effective robustness" measures the extra out-of-distribution robustness beyond what can be predicted from the in-distribution (ID) performance.
We propose a new evaluation metric to evaluate and compare the effective robustness of models trained on different data.
arXiv Detail & Related papers (2023-02-02T19:28:41Z)
- From Environmental Sound Representation to Robustness of 2D CNN Models Against Adversarial Attacks [82.21746840893658]
This paper investigates the impact of different standard environmental sound representations (spectrograms) on the recognition performance and adversarial attack robustness of a victim residual convolutional neural network.
We show that while the ResNet-18 model trained on DWT spectrograms achieves a high recognition accuracy, attacking this model is relatively more costly for the adversary.
arXiv Detail & Related papers (2022-04-14T15:14:08Z)
- MDN-VO: Estimating Visual Odometry with Confidence [34.8860186009308]
Visual Odometry (VO) is used in many applications including robotics and autonomous systems.
We propose a deep learning-based VO model to estimate 6-DoF poses, as well as a confidence model for these estimates.
Our experiments show that the proposed model exceeds state-of-the-art performance in addition to detecting failure cases.
arXiv Detail & Related papers (2021-12-23T19:26:04Z)
- Meta Internal Learning [88.68276505511922]
Internal learning for single-image generation is a framework in which a generator is trained to produce novel images based on a single image.
We propose a meta-learning approach that enables training over a collection of images, in order to model the internal statistics of the sample image more effectively.
Our results show that the models obtained are as suitable as single-image GANs for many common image applications.
arXiv Detail & Related papers (2021-10-06T16:27:38Z)
- Uncertainty-aware Generalized Adaptive CycleGAN [44.34422859532988]
Unpaired image-to-image translation refers to learning inter-image-domain mapping in an unsupervised manner.
Existing methods often learn deterministic mappings without explicitly modelling the robustness to outliers or predictive uncertainty.
We propose a novel probabilistic method called Uncertainty-aware Generalized Adaptive Cycle Consistency (UGAC).
arXiv Detail & Related papers (2021-02-23T15:22:35Z)
- Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations [71.00754846434744]
We show that imperceptible additive perturbations can significantly alter the disparity map.
We show that, when used for adversarial data augmentation, our perturbations result in trained models that are more robust.
arXiv Detail & Related papers (2020-09-21T19:20:09Z)
- From Sound Representation to Model Robustness [82.21746840893658]
We investigate the impact of different standard environmental sound representations (spectrograms) on the recognition performance and adversarial attack robustness of a victim residual convolutional neural network.
Averaged over various experiments on three environmental sound datasets, we found the ResNet-18 model outperforms other deep learning architectures.
arXiv Detail & Related papers (2020-07-27T17:30:49Z)
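For the zero-shot robustness entry above ("Benchmarking Zero-Shot Robustness of Multimodal Foundation Models"), the snippet below sketches the basic zero-shot CLIP classification setup being benchmarked. The checkpoint name, image path, and label prompts are illustrative assumptions, not that paper's exact configuration.
```python
# Minimal zero-shot CLIP classification sketch (assumed setup).
# Requires: pip install torch transformers pillow
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # hypothetical input image
prompts = ["a photo of a dog", "a photo of a cat", "a photo of a car"]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-text similarity scores
probs = logits.softmax(dim=-1)[0]
for prompt, p in zip(prompts, probs.tolist()):
    print(f"{prompt}: {p:.3f}")
```
Robustness benchmarks of this setup then re-run the same classification on distribution-shifted or perturbed inputs and compare the accuracy drop against supervised ImageNet models.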