Adversarial Examples Detection with Enhanced Image Difference Features
based on Local Histogram Equalization
- URL: http://arxiv.org/abs/2305.04436v1
- Date: Mon, 8 May 2023 03:14:01 GMT
- Title: Adversarial Examples Detection with Enhanced Image Difference Features
based on Local Histogram Equalization
- Authors: Zhaoxia Yin and Shaowei Zhu and Hang Su and Jianteng Peng and Wanli
Lyu and Bin Luo
- Abstract summary: We propose an adversarial example detection framework based on a high-frequency information enhancement strategy.
This framework can effectively extract and amplify the feature differences between adversarial examples and normal examples.
- Score: 20.132066800052712
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks (DNNs) have recently made significant progress in many
fields. However, studies have shown that DNNs are vulnerable to adversarial
examples, where imperceptible perturbations can greatly mislead DNNs even if
the full underlying model parameters are not accessible. Various defense
methods have been proposed, such as feature compression and gradient masking.
However, numerous studies have shown that previous methods detect or defend
against only certain attacks, which renders them ineffective against the
latest unknown attack methods. The invisibility of adversarial perturbations
is one of the evaluation indicators for adversarial example attacks, which
also means that the difference in the local correlation of high-frequency
information between adversarial examples and normal examples can be used as
an effective feature to distinguish the two. Therefore, we propose an
adversarial example detection framework based on a high-frequency information
enhancement strategy, which can effectively extract and amplify the feature
differences between adversarial examples and normal examples. Experimental
results show that, under this framework, the feature augmentation module can
be combined with existing detection models in a modular way, improving the
detector's performance and reducing the deployment cost without modifying the
existing detection model.
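As a rough illustration of the enhancement idea described in the abstract, the sketch below applies local histogram equalization (CLAHE in OpenCV) to amplify local high-frequency structure and treats the residual against the original image as the detection feature. The CLAHE parameters, the residual construction, and the logistic-regression detector are assumptions made for illustration; this is not the authors' released pipeline.

# Hypothetical sketch: amplify high-frequency differences with local
# histogram equalization (CLAHE) and feed the residual to a simple detector.
# Parameters and the logistic-regression detector are illustrative choices,
# not the paper's exact configuration.
import cv2
import numpy as np
from sklearn.linear_model import LogisticRegression

def enhanced_difference_feature(img_bgr, clip_limit=2.0, tile=(8, 8)):
    """Return a flattened 'enhanced difference' feature for one uint8 BGR image.

    Local histogram equalization exaggerates local high-frequency structure;
    subtracting the original keeps only the amplified residual, where
    adversarial perturbations tend to stand out.
    """
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    equalized = clahe.apply(gray)
    residual = cv2.absdiff(equalized, gray)  # amplified high-frequency residual
    return (residual.astype(np.float32) / 255.0).ravel()

def train_detector(clean_imgs, adv_imgs):
    """Train a binary detector on residual features (0 = clean, 1 = adversarial)."""
    X = np.stack([enhanced_difference_feature(im) for im in clean_imgs + adv_imgs])
    y = np.array([0] * len(clean_imgs) + [1] * len(adv_imgs))
    return LogisticRegression(max_iter=1000).fit(X, y)

In the paper's framework the enhanced features feed an existing detection model as a modular front end rather than a linear classifier; the sketch only shows how a local-equalization residual can be produced and consumed.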
Related papers
- Detecting Adversarial Examples [24.585379549997743]
We propose a novel method to detect adversarial examples by analyzing the layer outputs of Deep Neural Networks.
Our method is highly effective, compatible with any DNN architecture, and applicable across different domains, such as image, video, and audio.
arXiv Detail & Related papers (2024-10-22T21:42:59Z) - AFLOW: Developing Adversarial Examples under Extremely Noise-limited
Settings [7.828994881163805]
Deep neural networks (DNNs) are vulnerable to adversarial attacks.
We propose a novel normalizing-flow-based end-to-end attack framework, called AFLOW, to synthesize imperceptible adversarial examples.
Compared with existing methods, AFLOW exhibits superiority in imperceptibility, image quality, and attack capability.
arXiv Detail & Related papers (2023-10-15T10:54:07Z) - Improving Adversarial Robustness to Sensitivity and Invariance Attacks
with Deep Metric Learning [80.21709045433096]
A standard method in adversarial robustness assumes a framework to defend against samples crafted by minimally perturbing a sample.
We use metric learning to frame adversarial regularization as an optimal transport problem.
Our preliminary results indicate that regularizing over invariant perturbations in our framework improves defense against both invariance-based and sensitivity-based attacks.
arXiv Detail & Related papers (2022-11-04T13:54:02Z) - Latent Boundary-guided Adversarial Training [61.43040235982727]
Adversarial training has proven to be the most effective strategy that injects adversarial examples into model training.
We propose a novel adversarial training framework called LAtent bounDary-guided aDvErsarial tRaining (LADDER).
arXiv Detail & Related papers (2022-06-08T07:40:55Z) - Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z) - Adversarial Examples Detection beyond Image Space [88.7651422751216]
We find that there exists compliance between perturbations and prediction confidence, which guides us to detect few-perturbation attacks from the aspect of prediction confidence.
We propose a method beyond image space by a two-stream architecture, in which the image stream focuses on the pixel artifacts and the gradient stream copes with the confidence artifacts.
arXiv Detail & Related papers (2021-02-23T09:55:03Z) - Beating Attackers At Their Own Games: Adversarial Example Detection
Using Adversarial Gradient Directions [16.993439721743478]
The proposed method is based on the observation that the directions of adversarial gradients play a key role in characterizing the adversarial space.
Experiments conducted on two different databases, CIFAR-10 and ImageNet, show that the proposed detection method achieves 97.9% and 98.6% AUC-ROC on five different adversarial attacks.
arXiv Detail & Related papers (2020-12-31T01:12:24Z) - Learning to Separate Clusters of Adversarial Representations for Robust
Adversarial Detection [50.03939695025513]
We propose a new probabilistic adversarial detector motivated by a recently introduced non-robust feature.
In this paper, we consider the non-robust features as a common property of adversarial examples, and we deduce it is possible to find a cluster in representation space corresponding to the property.
This idea leads us to estimate the probability distribution of adversarial representations in a separate cluster, and to leverage that distribution for a likelihood-based adversarial detector.
arXiv Detail & Related papers (2020-12-07T07:21:18Z) - FADER: Fast Adversarial Example Rejection [19.305796826768425]
Recent defenses have been shown to improve adversarial robustness by detecting anomalous deviations from legitimate training samples at different layer representations.
We introduce FADER, a novel technique for speeding up detection-based methods.
Our experiments show up to a 73x reduction in prototypes compared to the analyzed detectors on the MNIST dataset, and up to 50x on CIFAR10.
arXiv Detail & Related papers (2020-10-18T22:00:11Z) - A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack
and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches equilibrium distribution of adversarial examples.
Both quantitative and qualitative analyses on several natural image datasets and practical systems have confirmed the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-10-15T16:07:26Z) - Detection Defense Against Adversarial Attacks with Saliency Map [7.736844355705379]
It is well established that neural networks are vulnerable to adversarial examples, which are almost imperceptible to human vision.
Existing defenses tend to harden the robustness of models against adversarial attacks.
We propose a novel method that combines additional noise with an inconsistency strategy to detect adversarial examples (a rough sketch of this idea follows this entry).
arXiv Detail & Related papers (2020-09-06T13:57:17Z)
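As a rough, hypothetical sketch of the noise-inconsistency idea in the last entry above: perturb an input with small random noise and flag it as adversarial when the model's predicted label flips or its confidence collapses. The noise scale, number of noisy copies, and thresholds are illustrative assumptions, and the cited method additionally uses saliency maps, which this sketch omits.

# Hypothetical sketch of prediction-inconsistency detection under random noise.
# Noise level, number of copies, and thresholds are illustrative choices only.
import torch

@torch.no_grad()
def is_adversarial(model, x, noise_std=0.03, n_copies=8, conf_drop=0.25):
    """Flag x (shape [1, C, H, W], values in [0, 1]) as adversarial if noisy copies disagree.

    Clean inputs usually keep their label and confidence under small random
    noise, while adversarial inputs sit near decision boundaries and flip.
    """
    model.eval()
    probs = torch.softmax(model(x), dim=1)
    label = probs.argmax(dim=1)
    conf = probs.max().item()

    base = x.repeat(n_copies, 1, 1, 1)
    noisy = (base + noise_std * torch.randn_like(base)).clamp(0, 1)
    noisy_probs = torch.softmax(model(noisy), dim=1)

    flips = (noisy_probs.argmax(dim=1) != label).float().mean().item()
    mean_conf = noisy_probs[:, label.item()].mean().item()

    # Inconsistency: labels flip often or confidence collapses under noise.
    return flips > 0.5 or (conf - mean_conf) > conf_drop

The intuition behind this design choice is that clean inputs lie far from decision boundaries and remain stable under small random noise, while adversarial inputs are pushed just across a boundary and tend to revert when perturbed.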