Understanding Adversarial Examples from the Mutual Influence of Images
and Perturbations
- URL: http://arxiv.org/abs/2007.06189v1
- Date: Mon, 13 Jul 2020 05:00:09 GMT
- Title: Understanding Adversarial Examples from the Mutual Influence of Images
and Perturbations
- Authors: Chaoning Zhang, Philipp Benz, Tooba Imtiaz, In-So Kweon
- Abstract summary: We analyze adversarial examples by disentangling clean images from adversarial perturbations and studying their influence on each other.
Our results suggest a new perspective towards the relationship between images and universal perturbations.
We are the first to achieve the challenging task of a targeted universal attack without utilizing original training data.
- Score: 83.60161052867534
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A wide variety of works have explored the reason for the existence of
adversarial examples, but there is no consensus on the explanation. We propose
to treat the DNN logits as a vector for feature representation, and exploit
them to analyze the mutual influence of two independent inputs based on the
Pearson correlation coefficient (PCC). We utilize this vector representation to
understand adversarial examples by disentangling the clean images and
adversarial perturbations, and analyze their influence on each other. Our
results suggest a new perspective towards the relationship between images and
universal perturbations: Universal perturbations contain dominant features, and
images behave like noise to them. This feature perspective leads to a new
method for generating targeted universal adversarial perturbations using random
source images. We are the first to achieve the challenging task of a targeted
universal attack without utilizing original training data. Our approach using a
proxy dataset achieves comparable performance to the state-of-the-art baselines
which utilize the original training dataset.
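The analysis described in the abstract is straightforward to prototype: treat the logit vector a DNN produces for an input as its feature representation, and compare two inputs by the Pearson correlation coefficient (PCC) of their logit vectors. The sketch below is not the authors' code; it assumes a standard torchvision ResNet-18 and uses random placeholder tensors for the clean image and the perturbation, so a real universal perturbation would be needed to reproduce the dominance effect reported above.
```python
# Minimal sketch of the PCC-based logit analysis (illustrative, not the paper's code).
import numpy as np
import torch
import torchvision.models as models

def pcc(a, b):
    """Pearson correlation coefficient between two 1-D logit vectors."""
    return np.corrcoef(np.asarray(a, dtype=np.float64),
                       np.asarray(b, dtype=np.float64))[0, 1]

# Any ImageNet classifier works as the logit extractor; ResNet-18 is an arbitrary choice.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1).eval()

# Placeholder inputs in the model's input space: a "clean image" x and a small
# perturbation v (random here; in the paper v would be a universal perturbation).
x = torch.rand(1, 3, 224, 224)
v = 0.05 * torch.randn(1, 3, 224, 224)

with torch.no_grad():
    z_x  = model(x).squeeze(0).numpy()      # logits of the clean image alone
    z_v  = model(v).squeeze(0).numpy()      # logits of the perturbation alone
    z_xv = model(x + v).squeeze(0).numpy()  # logits of the combined input

# If the perturbation carries the dominant features, PCC(z_v, z_xv) is expected to be
# high while PCC(z_x, z_xv) stays low, i.e. the image behaves like noise to it.
print(f"PCC(clean, combined):        {pcc(z_x, z_xv):.3f}")
print(f"PCC(perturbation, combined): {pcc(z_v, z_xv):.3f}")
```
The same pairwise comparison applies to any two independent inputs (image vs. image, perturbation vs. perturbation), which is how the abstract frames the mutual influence of the two components.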
Related papers
- Separating common from salient patterns with Contrastive Representation
Learning [2.250968907999846]
Contrastive Analysis aims at separating the factors of variation that are common to two datasets from those salient to only one of them.
Current models based on Variational Auto-Encoders have shown poor performance in learning semantically-expressive representations.
We propose to leverage the ability of Contrastive Learning to learn semantically expressive representations well adapted for Contrastive Analysis.
arXiv Detail & Related papers (2024-02-19T08:17:13Z)
- TranSegPGD: Improving Transferability of Adversarial Examples on Semantic Segmentation [62.954089681629206]
We propose an effective two-stage adversarial attack strategy to improve the transferability of adversarial examples on semantic segmentation.
The proposed adversarial attack method can achieve state-of-the-art performance.
arXiv Detail & Related papers (2023-12-03T00:48:33Z)
- Causal Transportability for Visual Recognition [70.13627281087325]
We show that standard classifiers fail because the association between images and labels is not transportable across settings.
We then show that the causal effect, which severs all sources of confounding, remains invariant across domains.
This motivates us to develop an algorithm to estimate the causal effect for image classification.
arXiv Detail & Related papers (2022-04-26T15:02:11Z)
- Visual Recognition with Deep Learning from Biased Image Datasets [6.10183951877597]
We show how biasing models can be applied to remedy problems in the context of visual recognition.
Based on (approximate) knowledge of the biasing mechanisms at work, our approach reweights the observations.
We propose to use a low dimensional image representation, shared across the image databases.
arXiv Detail & Related papers (2021-09-06T10:56:58Z)
- Learning Bias-Invariant Representation by Cross-Sample Mutual Information Minimization [77.8735802150511]
We propose a cross-sample adversarial debiasing (CSAD) method to remove the bias information misused by the target task.
The correlation measurement plays a critical role in adversarial debiasing and is conducted by a cross-sample neural mutual information estimator.
We conduct thorough experiments on publicly available datasets to validate the advantages of the proposed method over state-of-the-art approaches.
arXiv Detail & Related papers (2021-08-11T21:17:02Z)
- Object-aware Contrastive Learning for Debiased Scene Representation [74.30741492814327]
We develop a novel object-aware contrastive learning framework that localizes objects in a self-supervised manner via a contrastive class activation map (ContraCAM).
We also introduce two data augmentations based on ContraCAM, object-aware random crop and background mixup, which reduce contextual and background biases during contrastive self-supervised learning.
arXiv Detail & Related papers (2021-07-30T19:24:07Z)
- Exploiting Image Translations via Ensemble Self-Supervised Learning for Unsupervised Domain Adaptation [0.0]
We introduce an unsupervised domain adaptation (UDA) strategy that combines multiple image translations, ensemble learning and self-supervised learning in one coherent approach.
We focus on one of the standard tasks of UDA in which a semantic segmentation model is trained on labeled synthetic data together with unlabeled real-world data.
arXiv Detail & Related papers (2021-07-13T16:43:02Z)
- Contrastive Separative Coding for Self-supervised Representation Learning [37.697375719184926]
We propose a self-supervised learning approach, namely Contrastive Separative Coding (CSC).
First, a multi-task separative encoder is built to extract shared separable and discriminative embeddings.
Second, we propose a powerful cross-attention mechanism performed over speaker representations across various interfering conditions.
arXiv Detail & Related papers (2021-03-01T07:32:00Z)
- Proactive Pseudo-Intervention: Causally Informed Contrastive Learning For Interpretable Vision Models [103.64435911083432]
We present a novel contrastive learning strategy called Proactive Pseudo-Intervention (PPI).
PPI leverages proactive interventions to guard against image features with no causal relevance.
We also devise a novel causally informed salience mapping module to identify key image pixels to intervene, and show it greatly facilitates model interpretability.
arXiv Detail & Related papers (2020-12-06T20:30:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.