Attack Agnostic Adversarial Defense via Visual Imperceptible Bound
- URL: http://arxiv.org/abs/2010.13247v1
- Date: Sun, 25 Oct 2020 23:14:26 GMT
- Title: Attack Agnostic Adversarial Defense via Visual Imperceptible Bound
- Authors: Saheb Chhabra, Akshay Agarwal, Richa Singh, and Mayank Vatsa
- Abstract summary: This research aims to design a defense model that is robust within a certain bound against both seen and unseen adversarial attacks.
The proposed defense model is evaluated on the MNIST, CIFAR-10, and Tiny ImageNet databases.
The proposed algorithm is attack agnostic, i.e. it does not require any knowledge of the attack algorithm.
- Score: 70.72413095698961
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The high susceptibility of deep learning algorithms against structured and
unstructured perturbations has motivated the development of efficient
adversarial defense algorithms. However, the lack of generalizability of
existing defense algorithms and the high variability in the performance of the
attack algorithms for different databases raise several questions about the
effectiveness of the defense algorithms. In this research, we aim to design a
defense model that is robust within a certain bound against both seen and
unseen adversarial attacks. This bound is related to the visual appearance of
an image, and we term it the \textit{Visual Imperceptible Bound (VIB)}. To
compute this bound, we propose a novel method that uses the database
characteristics. The VIB is further used to measure the effectiveness of attack
algorithms. The performance of the proposed defense model is evaluated on the
MNIST, CIFAR-10, and Tiny ImageNet databases on multiple attacks that include
C\&W ($l_2$) and DeepFool. The proposed defense model not only increases
robustness against several attacks but also retains or improves the
classification accuracy on the original clean test set. The proposed algorithm
is attack agnostic, i.e. it does not require any knowledge of the attack
algorithm.
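The abstract states that the VIB is derived from database characteristics but does not spell out the computation. As an illustrative sketch only, one plausible reading is a scalar l2 budget tied to the dataset's per-pixel variability, against which a perturbation's norm can be checked; the function names and the `scale` parameter below are hypothetical, not the paper's method:

```python
import numpy as np

def visual_imperceptible_bound(images, scale=0.05):
    """Hypothetical VIB surrogate: a fraction of the dataset's per-pixel
    standard deviation, collapsed into a single l2 budget. The paper's
    actual formula is not given in the abstract."""
    per_pixel_std = images.std(axis=0)            # variability of each pixel
    return scale * np.linalg.norm(per_pixel_std)  # scalar l2 budget

def within_bound(clean, adversarial, bound):
    """Treat an attack as visually imperceptible if the l2 norm of its
    perturbation stays inside the bound."""
    return np.linalg.norm(adversarial - clean) <= bound
```

Under this reading, an attack whose perturbation exceeds the bound would count as visually perceptible, which is how the VIB could double as a measure of attack effectiveness.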
Related papers
- Improving Adversarial Robustness via Decoupled Visual Representation Masking [65.73203518658224]
In this paper, we highlight two novel properties of robust features from the feature distribution perspective.
We find that state-of-the-art defense methods fail to address both of these issues well.
Specifically, we propose a simple but effective defense based on decoupled visual representation masking.
arXiv Detail & Related papers (2024-06-16T13:29:41Z) - Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defense mainly focuses on the known attacks, but the adversarial robustness to the unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z) - A Random-patch based Defense Strategy Against Physical Attacks for Face Recognition Systems [3.6202815454709536]
We propose a random-patch based defense strategy to robustly detect physical attacks on face recognition systems (FRS).
Our method can be easily applied to the real world face recognition system and extended to other defense methods to boost the detection performance.
arXiv Detail & Related papers (2023-04-16T16:11:56Z) - A Multi-objective Memetic Algorithm for Auto Adversarial Attack Optimization Design [1.9100854225243937]
Well-designed adversarial defense strategies can improve the robustness of deep learning models against adversarial examples.
Given a defended model, efficient adversarial attacks that impose a lower computational burden while driving robust accuracy lower still need to be explored.
We propose a multi-objective memetic algorithm for auto adversarial attack optimization design, which realizes an automatic search for near-optimal adversarial attacks against defended models.
arXiv Detail & Related papers (2022-08-15T03:03:05Z) - Defending Against Person Hiding Adversarial Patch Attack with a Universal White Frame [28.128458352103543]
High-performance object detection networks are vulnerable to adversarial patch attacks.
Person-hiding attacks are emerging as a serious problem in many safety-critical applications.
We propose a novel defense strategy that mitigates a person-hiding attack by optimizing defense patterns.
arXiv Detail & Related papers (2022-04-27T15:18:08Z) - A Game-Theoretic Approach for AI-based Botnet Attack Defence [5.020067709306813]
A new generation of botnets leverages Artificial Intelligence (AI) techniques to conceal the identity of botmasters and the attack intention in order to avoid detection.
No existing assessment tool is capable of evaluating the effectiveness of defense strategies against this kind of AI-based botnet attack.
We propose a sequential game theory model capable of analysing the potential strategies botnet attackers and defenders could use to reach a Nash Equilibrium (NE).
arXiv Detail & Related papers (2021-12-04T02:53:40Z) - Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z) - Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm [93.80082636284922]
Sparse adversarial attacks can fool deep neural networks (DNNs) by perturbing only a few pixels.
Recent efforts combine this sparsity with an additional l_infty bound on the perturbation magnitudes.
We propose a homotopy algorithm to jointly tackle the sparsity constraint and the perturbation bound in one unified framework.
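The joint constraint set such attacks operate under can be illustrated with a plain projection, keeping at most k nonzero pixels (l_0 sparsity) and clipping each surviving entry to eps (l_infty). This is only a sketch of the constraints, not the paper's homotopy algorithm, and the function name is hypothetical:

```python
import numpy as np

def project_sparse_linf(delta, k, eps):
    """Project a perturbation onto the joint constraint set of
    sparse-and-imperceptible attacks: at most k nonzero pixels (l_0)
    and per-pixel magnitude at most eps (l_infty)."""
    flat = delta.ravel().copy()
    # keep only the k largest-magnitude entries (l_0 constraint)
    if k < flat.size:
        drop = np.argsort(np.abs(flat))[:-k] if k > 0 else np.arange(flat.size)
        flat[drop] = 0.0
    # clip the surviving entries into the l_infty ball
    np.clip(flat, -eps, eps, out=flat)
    return flat.reshape(delta.shape)
```

An iterative attack could apply such a projection after each gradient step; the homotopy approach instead relaxes and tightens these constraints gradually.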
arXiv Detail & Related papers (2021-06-10T20:11:36Z) - MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We have proposed a deep learning-based network, termed MixNet, to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
arXiv Detail & Related papers (2020-10-25T23:01:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed content) and is not responsible for any consequences.