Understanding the Intrinsic Robustness of Image Distributions using
Conditional Generative Models
- URL: http://arxiv.org/abs/2003.00378v1
- Date: Sun, 1 Mar 2020 01:45:04 GMT
- Title: Understanding the Intrinsic Robustness of Image Distributions using
Conditional Generative Models
- Authors: Xiao Zhang, Jinghui Chen, Quanquan Gu, David Evans
- Abstract summary: We study the intrinsic robustness of two common image benchmarks under $\ell_2$ perturbations.
We show the existence of a large gap between the robustness limits implied by our theory and the adversarial robustness achieved by current state-of-the-art robust models.
- Score: 87.00072607024026
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Starting with Gilmer et al. (2018), several works have demonstrated the
inevitability of adversarial examples based on different assumptions about the
underlying input probability space. It remains unclear, however, whether these
results apply to natural image distributions. In this work, we assume the
underlying data distribution is captured by some conditional generative model,
and prove intrinsic robustness bounds for a general class of classifiers, which
solves an open problem in Fawzi et al. (2018). Building upon the
state-of-the-art conditional generative models, we study the intrinsic
robustness of two common image benchmarks under $\ell_2$ perturbations, and
show the existence of a large gap between the robustness limits implied by our
theory and the adversarial robustness achieved by current state-of-the-art
robust models. Code for all our experiments is available at
https://github.com/xiaozhanguva/Intrinsic-Rob.
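To make the comparison described in the abstract concrete, the sketch below shows how an empirical robust-accuracy estimate under $\ell_2$ perturbations could be set against a theoretical robustness limit. This is not the paper's method or code: the toy data, the hand-picked linear softmax classifier, the projected-gradient attack, and the placeholder constant INTRINSIC_ROBUSTNESS_BOUND are all assumptions for illustration only.
```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an image benchmark: two Gaussian classes in R^d.
d, n = 20, 200
X = np.vstack([rng.normal(-1.0, 1.0, size=(n, d)),
               rng.normal(+1.0, 1.0, size=(n, d))])
y = np.array([0] * n + [1] * n)

# Hand-picked linear softmax classifier (no training, illustration only).
W = np.vstack([-np.ones(d), np.ones(d)]) / np.sqrt(d)   # shape (2, d)
b = np.zeros(2)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predict(X):
    return (X @ W.T + b).argmax(axis=-1)

def l2_pgd(X, y, eps=3.0, alpha=0.5, steps=20):
    """Projected gradient ascent on the cross-entropy loss, constrained to
    an l2 ball of radius eps around each clean input."""
    X_adv = X.copy()
    onehot = np.eye(2)[y]
    for _ in range(steps):
        p = softmax(X_adv @ W.T + b)
        grad = (p - onehot) @ W               # exact input gradient for a linear model
        grad /= np.linalg.norm(grad, axis=1, keepdims=True) + 1e-12
        X_adv = X_adv + alpha * grad
        delta = X_adv - X
        norm = np.linalg.norm(delta, axis=1, keepdims=True)
        X_adv = X + delta * np.minimum(1.0, eps / (norm + 1e-12))
    return X_adv

clean_acc = (predict(X) == y).mean()
robust_acc = (predict(l2_pgd(X, y)) == y).mean()

# Hypothetical placeholder for a theoretical intrinsic-robustness limit at
# this budget -- NOT a number from the paper.
INTRINSIC_ROBUSTNESS_BOUND = 0.95

print(f"clean accuracy:  {clean_acc:.3f}")
print(f"robust accuracy: {robust_acc:.3f}")
print(f"gap to (hypothetical) intrinsic robustness bound: "
      f"{INTRINSIC_ROBUSTNESS_BOUND - robust_acc:.3f}")
```
In the paper, the theoretical side of this comparison comes from the concentration properties implied by a conditional generative model of the data, and the empirical side from state-of-the-art robust classifiers on the two image benchmarks; the sketch only mirrors the bookkeeping of the comparison.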
Related papers
- Fine-Tuning Image-Conditional Diffusion Models is Easier than You Think [53.2706196341054]
We show that the perceived inefficiency of such models was caused by a flaw in the inference pipeline that has so far gone unnoticed.
We perform end-to-end fine-tuning on top of the single-step model with task-specific losses and get a deterministic model that outperforms all other diffusion-based depth and normal estimation models.
arXiv Detail & Related papers (2024-09-17T16:58:52Z)
- Identifiable Latent Neural Causal Models [82.14087963690561]
Causal representation learning seeks to uncover latent, high-level causal representations from low-level observed data.
We determine the types of distribution shifts that do contribute to the identifiability of causal representations.
We translate our findings into a practical algorithm, allowing for the acquisition of reliable latent causal representations.
arXiv Detail & Related papers (2024-03-23T04:13:55Z)
- The Surprising Harmfulness of Benign Overfitting for Adversarial Robustness [13.120373493503772]
We prove a surprising result: even if the ground truth itself is robust to adversarial examples and the benignly overfitted model is benign in terms of the "standard" out-of-sample risk objective, the benign overfitting process can still be harmful once adversarial risk is considered.
Our finding provides theoretical insights into the puzzling phenomenon observed in practice, where the true target function (e.g., a human) is robust against adversarial attacks, while benignly overfitted neural networks yield models that are not robust.
arXiv Detail & Related papers (2024-01-19T15:40:46Z)
- A Practical Upper Bound for the Worst-Case Attribution Deviations [21.341303776931532]
Model attribution is a critical component of deep neural networks (DNNs), as it provides interpretability for complex models.
Recent studies have drawn attention to the security of attribution methods, which are vulnerable to attribution attacks that generate visually similar images with dramatically different attributions.
Existing works have investigated empirically improving the robustness of DNNs against such attacks; however, none of them explicitly quantifies the actual deviation of attributions.
In this work, for the first time, a constrained optimization problem is formulated to derive an upper bound that measures the largest dissimilarity of attributions after the samples are perturbed by any noise within a certain region.
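In symbols, and using generic placeholder notation rather than the paper's own, the constrained optimization described above can be written as
$$\max_{\|\delta\| \le \epsilon} \; D\big(g(x),\, g(x+\delta)\big),$$
where $g(\cdot)$ is the attribution map, $D$ is a dissimilarity measure between attributions, and $\epsilon$ is the radius of the admissible perturbation region; the paper's contribution is an upper bound on this maximum rather than an algorithm for solving it.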
arXiv Detail & Related papers (2023-03-01T09:07:27Z)
- Fairness Increases Adversarial Vulnerability [50.90773979394264]
This paper shows the existence of a dichotomy between fairness and robustness, and analyzes when achieving fairness decreases the model robustness to adversarial samples.
Experiments on non-linear models and different architectures validate the theoretical findings in multiple vision domains.
The paper proposes a simple, yet effective, solution to construct models achieving good tradeoffs between fairness and robustness.
arXiv Detail & Related papers (2022-11-21T19:55:35Z)
- Image Generation with Multimodal Priors using Denoising Diffusion Probabilistic Models [54.1843419649895]
A major challenge in using generative models for image synthesis under multimodal priors is the lack of paired data containing all modalities and corresponding outputs.
We propose a solution based on denoising diffusion probabilistic models to generate images under multimodal priors.
arXiv Detail & Related papers (2022-06-10T12:23:05Z)
- Robustness via Uncertainty-aware Cycle Consistency [44.34422859532988]
Unpaired image-to-image translation refers to learning inter-image-domain mapping without corresponding image pairs.
Existing methods learn deterministic mappings without explicitly modelling the robustness to outliers or predictive uncertainty.
We propose a novel probabilistic method based on Uncertainty-aware Generalized Adaptive Cycle Consistency (UGAC).
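As a rough illustration (not necessarily the paper's exact formulation), one way to make a reconstruction or cycle-consistency loss uncertainty-aware is to replace a fixed L1/L2 penalty with the negative log-likelihood of a heteroscedastic generalized Gaussian whose per-pixel scale and shape are predicted by the network. The function below and all of its parameter names are assumptions for illustration only.
```python
import numpy as np
from scipy.special import gammaln

def uncertainty_aware_recon_loss(x_true, x_recon, alpha, beta, eps=1e-6):
    """Negative log-likelihood of a generalized Gaussian with per-pixel
    scale `alpha` and shape `beta` evaluated on the reconstruction residual.

    Pixels with large predicted alpha (high uncertainty) are down-weighted,
    so outliers contribute less than under a plain L1/L2 cycle loss.
    """
    alpha = np.maximum(alpha, eps)
    beta = np.maximum(beta, eps)
    resid = np.abs(x_true - x_recon)
    nll = ((resid / alpha) ** beta
           - np.log(beta) + np.log(2.0 * alpha) + gammaln(1.0 / beta))
    return nll.mean()

# Toy usage: a reconstruction with one corrupted (outlier) pixel.
rng = np.random.default_rng(0)
x_true = rng.uniform(0, 1, size=(8, 8))
x_recon = x_true + rng.normal(0, 0.05, size=(8, 8))
x_recon[0, 0] += 5.0                  # simulated outlier

alpha = np.full((8, 8), 0.05)         # predicted per-pixel scale (uncertainty)
alpha[0, 0] = 5.0                     # the outlier pixel is flagged as uncertain
beta = np.full((8, 8), 1.5)           # predicted per-pixel shape

print("uncertainty-aware loss:", uncertainty_aware_recon_loss(x_true, x_recon, alpha, beta))
print("plain L1 loss:         ", np.abs(x_true - x_recon).mean())
```
The design point this sketch tries to capture is that pixels flagged as uncertain contribute less to the loss, which is what yields robustness to outliers compared with a purely deterministic mapping.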
arXiv Detail & Related papers (2021-10-24T15:33:21Z)
- Uncertainty-aware Generalized Adaptive CycleGAN [44.34422859532988]
Unpaired image-to-image translation refers to learning inter-image-domain mapping in an unsupervised manner.
Existing methods often learn deterministic mappings without explicitly modelling the robustness to outliers or predictive uncertainty.
We propose a novel probabilistic method called Uncertainty-aware Generalized Adaptive Cycle Consistency (UGAC).
arXiv Detail & Related papers (2021-02-23T15:22:35Z)
- Measuring Robustness to Natural Distribution Shifts in Image Classification [67.96056447092428]
We study how robust current ImageNet models are to distribution shifts arising from natural variations in datasets.
We find that there is often little to no transfer of robustness from current synthetic to natural distribution shift.
Our results indicate that distribution shifts arising in real data are currently an open research problem.
arXiv Detail & Related papers (2020-07-01T17:53:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.