DISCO: Adversarial Defense with Local Implicit Functions
- URL: http://arxiv.org/abs/2212.05630v1
- Date: Sun, 11 Dec 2022 23:54:26 GMT
- Title: DISCO: Adversarial Defense with Local Implicit Functions
- Authors: Chih-Hui Ho, Nuno Vasconcelos
- Abstract summary: A novel aDversarIal defenSe with local impliCit functiOns is proposed to remove adversarial perturbations by localized manifold projections.
DISCO consumes an adversarial image and a query pixel location and outputs a clean RGB value at the location.
- Score: 79.39156814887133
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The problem of adversarial defenses for image classification, where the goal
is to robustify a classifier against adversarial examples, is considered.
Inspired by the hypothesis that these examples lie beyond the natural image
manifold, a novel aDversarIal defenSe with local impliCit functiOns (DISCO) is
proposed to remove adversarial perturbations by localized manifold projections.
DISCO consumes an adversarial image and a query pixel location and outputs a
clean RGB value at the location. It is implemented with an encoder and a local
implicit module, where the former produces per-pixel deep features and the
latter uses the features in the neighborhood of the query pixel to predict the
clean RGB value. Extensive experiments demonstrate that both DISCO and its
cascade version outperform prior defenses, regardless of whether the defense is
known to the attacker. DISCO is also shown to be data and parameter efficient
and to mount defenses that transfer across datasets, classifiers, and attacks.
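Below is a minimal sketch of this encoder-plus-local-implicit design, assuming a LIIF-style layout in PyTorch; the 3x3 feature neighborhood, the layer widths, and the simplified coordinate handling are illustrative assumptions, not the authors' exact architecture.
```python
# Sketch of a DISCO-style defense: a CNN encoder yields per-pixel deep
# features, and an MLP decodes the clean RGB value at a query location
# from the features around that location.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalImplicitDefense(nn.Module):
    def __init__(self, feat_dim=64, hidden=256):
        super().__init__()
        # Encoder: per-pixel deep features (toy stand-in for the real backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )
        # Local implicit module: MLP over a 3x3 feature neighborhood plus the
        # query coordinates, predicting a clean RGB value at that location.
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim * 9 + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, adv_img, coords):
        # adv_img: (B, 3, H, W); coords: (B, N, 2) query locations in [-1, 1].
        feat = self.encoder(adv_img)                          # (B, C, H, W)
        neigh = F.unfold(feat, kernel_size=3, padding=1)      # (B, C*9, H*W)
        B, C9, _ = neigh.shape
        neigh = neigh.view(B, C9, *feat.shape[-2:])           # (B, C*9, H, W)
        # Sample the neighborhood features at each query location.
        sampled = F.grid_sample(neigh, coords.unsqueeze(1),
                                align_corners=False)          # (B, C*9, 1, N)
        sampled = sampled.squeeze(2).permute(0, 2, 1)         # (B, N, C*9)
        rgb = self.mlp(torch.cat([sampled, coords], dim=-1))  # (B, N, 3)
        return rgb
```
Querying every pixel location in this way yields a purified image that can be fed to the unchanged downstream classifier; the cascade version mentioned above stacks several such purification passes.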
Related papers
- To Make Yourself Invisible with Adversarial Semantic Contours [47.755808439588094]
Adversarial Semantic Contour (ASC) is an estimate of a Bayesian formulation of sparse attack with a deceived prior of object contour.
We show that ASC can corrupt the prediction of 9 modern detectors with different architectures.
We conclude with cautions that the contour is a common weakness of object detectors across various architectures.
arXiv Detail & Related papers (2023-03-01T07:22:39Z) - Object-Attentional Untargeted Adversarial Attack [11.800889173823945]
We propose an object-attentional adversarial attack method for untargeted attacks.
Specifically, we first generate an object region by intersecting the object detection region from YOLOv4 with the salient object detection region from HVPNet.
Then, we perform an adversarial attack only on the detected object region by leveraging the Simple Black-box Adversarial Attack (SimBA); a sketch of this masked variant follows below.
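As a rough illustration of the masking idea, the sketch below runs a SimBA-style random coordinate search only inside a binary object mask; the `model` (assumed to return class probabilities) and the precomputed `mask` are placeholder inputs, and the step size and query budget are arbitrary.
```python
# SimBA-style black-box attack restricted to an object region.
import torch

def masked_simba(model, x, label, mask, eps=0.2, steps=5000):
    # x: (1, 3, H, W) image in [0, 1]; mask: (1, 1, H, W), 1 inside the object region.
    x_adv = x.clone()
    coords = mask.expand_as(x).flatten().nonzero(as_tuple=False).squeeze(1)
    order = coords[torch.randperm(coords.numel())][:steps]   # random coordinate order
    with torch.no_grad():
        p_best = model(x_adv)[0, label]
        for i in order:
            for sign in (eps, -eps):
                cand = x_adv.flatten().clone()
                cand[i] = (cand[i] + sign).clamp(0, 1)
                cand = cand.view_as(x)
                p = model(cand)[0, label]
                if p < p_best:               # keep the step if the true-class
                    x_adv, p_best = cand, p  # probability drops
                    break
    return x_adv
```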
arXiv Detail & Related papers (2022-10-16T07:45:13Z) - Leveraging Local Patch Differences in Multi-Object Scenes for Generative
Adversarial Attacks [48.66027897216473]
We tackle a more practical problem of generating adversarial perturbations using multi-object (i.e., multiple dominant objects) images.
We propose a generative attack (called Local Patch Difference or LPD-Attack) whose contrastive loss exploits these local differences in the feature space of multi-object scenes; a sketch of such a patch-level contrast follows below.
Our approach outperforms baseline generative attacks with highly transferable perturbations when evaluated under different white-box and black-box settings.
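The sketch below shows one way such a patch-level contrast could be written: mid-layer features of the clean and perturbed images are split into local patches, and a cross-entropy over patch correspondences is maximized by the attack generator so that matching patches are driven apart in feature space. The surrogate features, patch size, and temperature are assumptions; this is not the exact LPD-Attack objective.
```python
# Patch-wise feature contrast between clean and perturbed images.
import torch
import torch.nn.functional as F

def patch_contrast_loss(feat_clean, feat_adv, patch=4, tau=0.1):
    # feat_*: (B, C, H, W) features from a frozen surrogate network.
    def to_patches(f):
        p = F.unfold(f, kernel_size=patch, stride=patch)   # (B, C*patch*patch, N)
        return F.normalize(p.transpose(1, 2), dim=-1)       # (B, N, D) unit vectors
    zc, za = to_patches(feat_clean), to_patches(feat_adv)
    # Similarity of each adversarial patch to every clean patch.
    sim = torch.bmm(za, zc.transpose(1, 2)) / tau            # (B, N, N)
    labels = torch.arange(sim.size(1), device=sim.device).expand(sim.size(0), -1)
    # An attack generator would maximize this term: treating the matching clean
    # patch as the "class", a larger cross-entropy means corresponding
    # clean/adversarial patch features have been pushed apart.
    return F.cross_entropy(sim.transpose(1, 2), labels)
```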
arXiv Detail & Related papers (2022-09-20T17:36:32Z) - Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations; a simplified generator-style sketch follows below.
The adversarial examples generated by SSAE not only make the widely used models collapse, but also achieve good visual quality.
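Below is a heavily simplified generator-style sketch of this scheme: an auto-encoder maps the input to a bounded perturbation that is added back to the image. The saliency weighting and training losses of the actual SSAE are omitted, and all layer sizes and the perturbation budget are illustrative.
```python
# Auto-encoder that outputs a bounded additive perturbation.
import torch
import torch.nn as nn

class PerturbationAutoEncoder(nn.Module):
    def __init__(self, ch=32, eps=8 / 255):
        super().__init__()
        self.eps = eps
        self.encoder = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        delta = self.decoder(self.encoder(x)) * self.eps   # L-inf bounded perturbation
        return (x + delta).clamp(0, 1)                     # adversarial example
```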
arXiv Detail & Related papers (2021-07-20T01:55:21Z) - Generating Out of Distribution Adversarial Attack using Latent Space
Poisoning [5.1314136039587925]
We propose a novel mechanism for generating adversarial examples in which the actual image is not corrupted.
Instead, the latent space representation is utilized to tamper with the inherent structure of the image; a minimal sketch of this idea follows below.
As opposed to gradient-based attacks, the latent space poisoning exploits the inclination of classifiers to model the independent and identically distributed (i.i.d.) training data.
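A minimal sketch of the latent-space idea, assuming pretrained `encoder`, `decoder`, and `classifier` modules; the single signed-gradient step on the latent code is an illustrative simplification rather than the paper's procedure.
```python
# Perturb the latent code rather than the pixels, then decode.
import torch
import torch.nn.functional as F

def latent_poison(encoder, decoder, classifier, x, label, step=0.05):
    z = encoder(x).detach().requires_grad_(True)    # latent code of the image
    logits = classifier(decoder(z))
    loss = F.cross_entropy(logits, label)           # true-class loss
    loss.backward()
    # Move the latent code in the direction that increases the loss, then
    # decode an image that looks natural but misleads the classifier.
    z_adv = z + step * z.grad.sign()
    return decoder(z_adv).detach()
```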
arXiv Detail & Related papers (2020-12-09T13:05:44Z) - Creating Artificial Modalities to Solve RGB Liveness [79.9255035557979]
We introduce two types of artificial transforms, rank pooling and optical flow, combined in an end-to-end pipeline for spoof detection; a sketch of the rank-pooling transform follows below.
The proposed method achieves state-of-the-art performance on the largest cross-ethnicity face anti-spoofing dataset, CASIA-SURF CeFA (RGB).
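As an illustration of one such artificial transform, the sketch below collapses a short clip into a single dynamic-image-like frame using the common approximate rank pooling weighting (2t - T - 1); the optical-flow branch and the end-to-end spoof classifier are not reproduced here.
```python
# Approximate rank pooling of a short RGB clip into one image.
import numpy as np

def approximate_rank_pooling(frames):
    # frames: (T, H, W, 3) float array of consecutive RGB frames.
    T = frames.shape[0]
    t = np.arange(1, T + 1, dtype=np.float32)
    alpha = 2.0 * t - T - 1.0                         # weight per frame
    dyn = np.tensordot(alpha, frames, axes=(0, 0))    # weighted sum over time
    # Rescale to [0, 1] so the result can be fed to a CNN like an ordinary image.
    dyn = (dyn - dyn.min()) / (np.ptp(dyn) + 1e-8)
    return dyn
```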
arXiv Detail & Related papers (2020-06-29T13:19:22Z) - High-Order Information Matters: Learning Relation and Topology for
Occluded Person Re-Identification [84.43394420267794]
We propose a novel framework by learning high-order relation and topology information for discriminative features and robust alignment.
Our framework significantly outperforms the state-of-the-art by 6.5% mAP on the Occluded-Duke dataset.
arXiv Detail & Related papers (2020-03-18T12:18:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.