AdvJND: Generating Adversarial Examples with Just Noticeable Difference
- URL: http://arxiv.org/abs/2002.00179v2
- Date: Tue, 23 Jun 2020 09:34:17 GMT
- Title: AdvJND: Generating Adversarial Examples with Just Noticeable Difference
- Authors: Zifei Zhang, Kai Qiao, Lingyun Jiang, Linyuan Wang, and Bin Yan
- Abstract summary: Adding small perturbations to examples causes a well-performing model to misclassify the crafted examples.
Adversarial examples generated by our AdvJND algorithm yield distributions similar to those of the original inputs.
- Score: 3.638233924421642
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Compared with traditional machine learning models, deep neural networks
perform better, especially in image classification tasks. However, they are
vulnerable to adversarial examples. Adding small perturbations to examples
causes a well-performing model to misclassify the crafted examples, even though
they show no category difference to the human eye, and thus fools deep models
successfully. Generating adversarial examples involves two competing
requirements: a high attack success rate and high image fidelity. Generally,
perturbations are increased to ensure a high attack success rate; however, the
resulting adversarial examples are poorly concealed. To alleviate the tradeoff
between the attack success rate and image fidelity, we propose a method named
AdvJND, which adds visual-model coefficients, namely just noticeable difference
(JND) coefficients, to the constraint of the distortion function used when
generating adversarial examples. In effect, the subjective perception of the
human visual system is added as prior information that determines the
distribution of perturbations, improving the image quality of adversarial
examples. We tested
our method on the FashionMNIST, CIFAR10, and MiniImageNet datasets. Adversarial
examples generated by our AdvJND algorithm yield gradient distributions that
are similar to those of the original inputs. Hence, the crafted noise can be
hidden in the original inputs, thus improving the attack concealment
significantly.
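The abstract describes weighting the perturbation by just noticeable difference (JND) coefficients derived from a visual model. The snippet below is a minimal sketch of that idea, assuming a crude luminance/texture JND map and an FGSM-style single-step attack; the function names (`jnd_map`, `advjnd_attack`), the particular JND formula, and the epsilon value are illustrative placeholders, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def jnd_map(x, kernel_size=3, alpha=0.5, beta=0.1):
    """Crude per-pixel JND estimate: textured regions and regions far from
    mid-gray tolerate larger changes. A stand-in for the visual model used
    in the paper, not the authors' formula."""
    pad = kernel_size // 2
    mean = F.avg_pool2d(x, kernel_size, stride=1, padding=pad)            # local luminance
    contrast = F.avg_pool2d((x - mean).abs(), kernel_size, stride=1, padding=pad)  # local texture
    return alpha * contrast + beta * (mean - 0.5).abs() + 1e-3

def advjnd_attack(model, x, y, eps=8 / 255):
    """FGSM-style step whose per-pixel budget is modulated by the JND map,
    so stronger noise is placed where it is least noticeable."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    jnd = jnd_map(x.detach())
    jnd = jnd / jnd.amax(dim=(1, 2, 3), keepdim=True)   # normalize to [0, 1] per image
    x_adv = x.detach() + eps * jnd * grad.sign()
    return x_adv.clamp(0.0, 1.0)
```

The sketch is only meant to show where the JND coefficients enter: they rescale the per-pixel budget so that the crafted noise concentrates in regions where the human visual system is least sensitive.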
Related papers
- Utilizing Adversarial Examples for Bias Mitigation and Accuracy Enhancement [3.0820287240219795]
We propose a novel approach to mitigate biases in computer vision models by utilizing counterfactual generation and fine-tuning.
Our approach leverages a curriculum learning framework combined with a fine-grained adversarial loss to fine-tune the model using adversarial examples.
We validate our approach through both qualitative and quantitative assessments, demonstrating improved bias mitigation and accuracy compared to existing methods.
arXiv Detail & Related papers (2024-04-18T00:41:32Z)
- Transcending Adversarial Perturbations: Manifold-Aided Adversarial Examples with Legitimate Semantics [10.058463432437659]
Deep neural networks are significantly vulnerable to adversarial examples crafted with malicious, tiny perturbations.
In this paper, we propose a supervised semantic-transformation generative model to generate adversarial examples with real and legitimate semantics.
Experiments on MNIST and industrial defect datasets showed that our adversarial examples not only exhibited better visual quality but also achieved superior attack transferability.
arXiv Detail & Related papers (2024-02-05T15:25:40Z)
- Counterfactual Image Generation for adversarially robust and interpretable Classifiers [1.3859669037499769]
We propose a unified framework leveraging image-to-image translation Generative Adversarial Networks (GANs) to produce counterfactual samples.
This is achieved by combining the classifier and discriminator into a single model that attributes real images to their respective classes and flags generated images as "fake".
We show how the model exhibits improved robustness to adversarial attacks, and we show how the discriminator's "fakeness" value serves as an uncertainty measure of the predictions.
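A common way to realize such a combined classifier/discriminator is a single network with K class logits plus one extra "fake" logit; the sketch below assumes that reading, and the class count, layer sizes, and the name `ClassifierDiscriminator` are illustrative rather than taken from the paper.

```python
import torch
import torch.nn as nn

class ClassifierDiscriminator(nn.Module):
    """Single network with K real-class logits plus one extra 'fake' logit,
    acting as the classifier for real images and as the GAN discriminator
    for generated counterfactuals."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, num_classes + 1)  # index num_classes == "fake"

    def forward(self, x):
        return self.head(self.features(x))

# Training signal (sketch): real images carry their true class label,
# generated counterfactuals carry the extra "fake" index num_classes.
```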
arXiv Detail & Related papers (2023-10-01T18:50:29Z)
- Adversarial Examples Detection with Enhanced Image Difference Features based on Local Histogram Equalization [20.132066800052712]
We propose an adversarial example detection framework based on a high-frequency information enhancement strategy.
This framework can effectively extract and amplify the feature differences between adversarial examples and normal examples.
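As a rough illustration of the idea (not the paper's exact strategy), one can apply contrast-limited adaptive histogram equalization as a stand-in for local histogram equalization and treat the residual against the original image as the enhanced high-frequency feature; the parameter values below are arbitrary.

```python
import cv2
import numpy as np

def high_freq_feature(img_bgr, clip_limit=2.0, tile_size=8):
    """Amplify high-frequency content via local histogram equalization and
    return the residual against the original image; adversarial noise tends
    to stand out more in this residual than natural image content."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(tile_size, tile_size))
    equalized = clahe.apply(gray)
    residual = cv2.absdiff(equalized, gray)        # enhanced difference feature
    return residual.astype(np.float32) / 255.0     # input to a small detector network
```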
arXiv Detail & Related papers (2023-05-08T03:14:01Z)
- Identification of Attack-Specific Signatures in Adversarial Examples [62.17639067715379]
We show that different attack algorithms produce adversarial examples which are distinct not only in their effectiveness but also in how they qualitatively affect their victims.
Our findings suggest that prospective adversarial attacks should be compared not only via their success rates at fooling models but also via deeper downstream effects they have on victims.
arXiv Detail & Related papers (2021-10-13T15:40:48Z)
- Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
- Generating Unrestricted Adversarial Examples via Three Parameters [11.325135016306165]
A proposed adversarial attack generates an unrestricted adversarial example with a limited number of parameters.
It obtains an average success rate of 93.5% in terms of human evaluation on the MNIST and SVHN datasets.
It also reduces the model accuracy by an average of 73% on six datasets.
arXiv Detail & Related papers (2021-03-13T07:20:14Z)
- Adversarial Examples Detection beyond Image Space [88.7651422751216]
We find that there exists compliance between perturbations and prediction confidence, which guides us to detect few-perturbation attacks from the aspect of prediction confidence.
We propose a method beyond image space by a two-stream architecture, in which the image stream focuses on the pixel artifacts and the gradient stream copes with the confidence artifacts.
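A minimal sketch of such a two-stream detector, assuming the gradient stream operates on the input gradient of the classifier's top-class confidence; the module names and layer sizes are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def confidence_gradient(classifier, x):
    """Gradient of the top-class confidence w.r.t. the input pixels,
    used as the input to the gradient stream."""
    x = x.clone().detach().requires_grad_(True)
    conf = F.softmax(classifier(x), dim=1).max(dim=1).values.sum()
    return torch.autograd.grad(conf, x)[0].detach()

class TwoStreamDetector(nn.Module):
    """Image stream looks at pixel artifacts, gradient stream looks at
    confidence artifacts; features are fused for a binary decision."""
    def __init__(self):
        super().__init__()
        def stream():
            return nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.image_stream = stream()
        self.grad_stream = stream()
        self.fc = nn.Linear(32, 2)  # adversarial vs. normal

    def forward(self, x, x_grad):
        feats = torch.cat([self.image_stream(x), self.grad_stream(x_grad)], dim=1)
        return self.fc(feats)
```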
arXiv Detail & Related papers (2021-02-23T09:55:03Z)
- Error Diffusion Halftoning Against Adversarial Examples [85.11649974840758]
Adversarial examples contain carefully crafted perturbations that can fool deep neural networks into making wrong predictions.
We propose a new image transformation defense based on error diffusion halftoning, and combine it with adversarial training to defend against adversarial examples.
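For reference, classic Floyd-Steinberg error diffusion quantizes each pixel and spreads the quantization error onto unprocessed neighbours, which removes much of a fine-grained perturbation when used as an input transformation. The sketch below is a plain single-channel version, not the paper's specific variant or its combination with adversarial training.

```python
import numpy as np

def error_diffusion_halftone(img, levels=2):
    """Floyd-Steinberg error diffusion on a grayscale image in [0, 1].
    Each pixel is quantized and the residual error is pushed onto the
    unprocessed neighbours with the classic 7/16, 3/16, 5/16, 1/16 weights."""
    out = img.astype(np.float64).copy()
    h, w = out.shape
    step = 1.0 / (levels - 1)
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = np.round(old / step) * step      # nearest quantization level
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return np.clip(out, 0.0, 1.0)
```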
arXiv Detail & Related papers (2021-01-23T07:55:02Z)
- Shaping Deep Feature Space towards Gaussian Mixture for Visual Classification [74.48695037007306]
We propose a Gaussian mixture (GM) loss function for deep neural networks for visual classification.
With a classification margin and a likelihood regularization, the GM loss facilitates both high classification performance and accurate modeling of the feature distribution.
The proposed model can be implemented easily and efficiently without using extra trainable parameters.
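A simplified form of such a Gaussian mixture loss, assuming identity covariances and one learnable mean per class, is sketched below; the margin form and regularization weight are illustrative and may differ from the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GMLoss(nn.Module):
    """Classify by distance to learnable class means and add a likelihood
    term pulling features toward their class mean (identity covariances)."""
    def __init__(self, feat_dim, num_classes, margin=0.1, lambda_lkd=0.1):
        super().__init__()
        self.means = nn.Parameter(torch.randn(num_classes, feat_dim) * 0.01)
        self.margin = margin
        self.lambda_lkd = lambda_lkd

    def forward(self, features, labels):
        # squared Euclidean distance of each feature to every class mean
        d = torch.cdist(features, self.means) ** 2                  # (batch, classes)
        one_hot = F.one_hot(labels, num_classes=d.size(1)).float()
        # enlarge the true-class distance before the softmax: a classification margin
        d_margin = d * (1.0 + self.margin * one_hot)
        cls_loss = F.cross_entropy(-0.5 * d_margin, labels)
        # likelihood regularization: distance to the true class mean
        lkd_loss = (d * one_hot).sum(dim=1).mean()
        return cls_loss + self.lambda_lkd * lkd_loss
```

The two terms mirror the summary above: the margin-augmented softmax drives classification performance, while the likelihood term shapes the feature distribution toward per-class Gaussians, with no extra trainable parameters beyond the class means.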
arXiv Detail & Related papers (2020-11-18T03:32:27Z)
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches equilibrium distribution of adversarial examples.
Both quantitative and qualitative analyses on several natural image datasets and practical systems have confirmed the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-10-15T16:07:26Z)