Hindering Adversarial Attacks with Multiple Encrypted Patch Embeddings
- URL: http://arxiv.org/abs/2309.01620v1
- Date: Mon, 4 Sep 2023 14:08:34 GMT
- Authors: AprilPyone MaungMaung, Isao Echizen, Hitoshi Kiya
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a new key-based defense focusing on both efficiency
and robustness. Although the previous key-based defense appears effective in
defending against adversarial examples, carefully designed adaptive attacks can
bypass it, and it is difficult to train on large datasets like ImageNet. We
build upon the previous defense with two major improvements: (1) efficient
training and (2) optional randomization. The proposed defense utilizes one or
more secret patch embeddings and classifier heads with a pre-trained isotropic
network. When more than one secret embedding is used, the proposed defense
enables randomization at inference. Experiments were carried out on the
ImageNet dataset, and the proposed defense was evaluated against an arsenal of
state-of-the-art attacks, including adaptive ones. The results show that the
proposed defense achieves high robust accuracy and clean accuracy comparable
to that of the previous key-based defense.
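The mechanism described in the abstract can be sketched in miniature: several secret patch-embedding matrices (each derived from a secret key) and matching classifier heads share one frozen backbone, and inference randomly picks one (embedding, head) pair. This is a minimal illustrative NumPy sketch, not the authors' implementation; all class names, shapes, and the random-linear-map stand-in for the pre-trained isotropic network are assumptions, and the heads are left untrained.

```python
import numpy as np

class KeyedPatchDefense:
    """Toy sketch of a key-based defense with multiple secret patch
    embeddings and classifier heads over a shared frozen backbone.
    Hypothetical names/shapes; training of the heads is omitted."""

    def __init__(self, secret_keys, patch=4, img=8, dim=16, classes=10):
        self.patch, self.img = patch, img
        n_patches = (img // patch) ** 2
        p_dim = patch * patch  # flattened grayscale patch
        # One secret patch-embedding matrix per key, derived from the key.
        self.embeds = [np.random.default_rng(k).normal(size=(p_dim, dim))
                       for k in secret_keys]
        # Shared frozen backbone (stand-in for the pre-trained network).
        self.backbone = np.random.default_rng(0).normal(size=(n_patches * dim, dim))
        # One classifier head per key (untrained in this sketch).
        self.heads = [np.random.default_rng(k + 1).normal(size=(dim, classes))
                      for k in secret_keys]

    def _patches(self, x):
        # Split an img x img array into non-overlapping patch x patch tiles.
        p = self.patch
        return [x[i:i + p, j:j + p].ravel()
                for i in range(0, self.img, p)
                for j in range(0, self.img, p)]

    def predict(self, x, rng=None):
        # Optional randomization: sample one secret (embedding, head) pair.
        rng = rng if rng is not None else np.random.default_rng()
        k = int(rng.integers(len(self.embeds)))
        tokens = np.concatenate([pt @ self.embeds[k] for pt in self._patches(x)])
        feats = tokens @ self.backbone
        return int(np.argmax(feats @ self.heads[k]))

defense = KeyedPatchDefense(secret_keys=[42, 1234])
label = defense.predict(np.ones((8, 8)))
```

With a single key the defense is deterministic; with several keys an attacker cannot know which secret embedding processed a given query, which is the "optional randomization" the abstract refers to.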
Related papers
- Improving Adversarial Robustness via Decoupled Visual Representation Masking [65.73203518658224]
In this paper, we highlight two novel properties of robust features from the feature distribution perspective.
We find that state-of-the-art defense methods aim to address both of these issues.
Specifically, we propose a simple but effective defense based on decoupled visual representation masking.
arXiv Detail & Related papers (2024-06-16T13:29:41Z)
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defense mainly focuses on the known attacks, but the adversarial robustness to the unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID)
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- Closing the Gap: Achieving Better Accuracy-Robustness Tradeoffs against Query-Based Attacks [1.54994260281059]
We show how to efficiently establish, at test-time, a solid tradeoff between robustness and accuracy when mitigating query-based attacks.
Our approach is independent of training and supported by theory.
arXiv Detail & Related papers (2023-12-15T17:02:19Z)
- Adversarial Attacks Neutralization via Data Set Randomization [3.655021726150369]
Adversarial attacks on deep learning models pose a serious threat to their reliability and security.
We propose a new defense mechanism that is rooted on hyperspace projection.
We show that our solution increases the robustness of deep learning models against adversarial attacks.
arXiv Detail & Related papers (2023-06-21T10:17:55Z)
- Randomness in ML Defenses Helps Persistent Attackers and Hinders Evaluators [49.52538232104449]
It is becoming increasingly imperative to design robust ML defenses.
Recent work has found that many defenses that initially resist state-of-the-art attacks can be broken by an adaptive adversary.
We take steps to simplify the design of defenses and argue that white-box defenses should eschew randomness when possible.
arXiv Detail & Related papers (2023-02-27T01:33:31Z)
- On the Limitations of Stochastic Pre-processing Defenses [42.80542472276451]
Defending against adversarial examples remains an open problem.
A common belief is that randomness at inference increases the cost of finding adversarial inputs.
In this paper, we investigate such pre-processing defenses and demonstrate that they are flawed.
arXiv Detail & Related papers (2022-06-19T21:54:42Z)
- MAD-VAE: Manifold Awareness Defense Variational Autoencoder [0.0]
We introduce several methods to improve the robustness of defense models.
With extensive experiments on MNIST data set, we have demonstrated the effectiveness of our algorithms.
We also discuss the applicability of existing adversarial latent space attacks as they may have a significant flaw.
arXiv Detail & Related papers (2020-10-31T09:04:25Z)
- Attack Agnostic Adversarial Defense via Visual Imperceptible Bound [70.72413095698961]
This research aims to design a defense model that is robust within a certain bound against both seen and unseen adversarial attacks.
The proposed defense model is evaluated on the MNIST, CIFAR-10, and Tiny ImageNet databases.
The proposed algorithm is attack agnostic, i.e., it does not require any knowledge of the attack algorithm.
arXiv Detail & Related papers (2020-10-25T23:14:26Z)
- Certified Defenses for Adversarial Patches [72.65524549598126]
Adversarial patch attacks are among the most practical threat models against real-world computer vision systems.
This paper studies certified and empirical defenses against patch attacks.
arXiv Detail & Related papers (2020-03-14T19:57:31Z)
- Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks [65.20660287833537]
In this paper we propose two extensions of the PGD-attack overcoming failures due to suboptimal step size and problems of the objective function.
We then combine our novel attacks with two complementary existing ones to form a parameter-free, computationally affordable and user-independent ensemble of attacks to test adversarial robustness.
arXiv Detail & Related papers (2020-03-03T18:15:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.