Detecting Adversaries, yet Faltering to Noise? Leveraging Conditional
Variational AutoEncoders for Adversary Detection in the Presence of Noisy
Images
- URL: http://arxiv.org/abs/2111.15518v1
- Date: Sun, 28 Nov 2021 20:36:27 GMT
- Title: Detecting Adversaries, yet Faltering to Noise? Leveraging Conditional
Variational AutoEncoders for Adversary Detection in the Presence of Noisy
Images
- Authors: Dvij Kalaria, Aritra Hazra and Partha Pratim Chakrabarti
- Abstract summary: Conditional Variational AutoEncoders (CVAE) are surprisingly good at detecting imperceptible image perturbations.
We show how CVAEs can be effectively used to detect adversarial attacks on image classification networks.
- Score: 0.7734726150561086
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rapid advancement and increased use of deep learning models in image
identification, security becomes a major concern for their deployment in
safety-critical systems. Since the accuracy and robustness of deep learning
models are primarily attributed to the purity of the training samples, deep
learning architectures are often susceptible to adversarial attacks.
Adversarial attacks are often obtained by making subtle perturbations to normal
images that are mostly imperceptible to humans but can seriously confuse
state-of-the-art machine learning models. What is so special about these
slight, intelligently crafted perturbations or noise additions to normal images
that they lead to catastrophic misclassifications by deep neural networks?
Using statistical hypothesis testing, we find that Conditional Variational
AutoEncoders (CVAE) are surprisingly good at detecting imperceptible image
perturbations. In this paper, we show how CVAEs can be effectively used to
detect adversarial attacks on image classification networks. We demonstrate our
results on the MNIST and CIFAR-10 datasets and show that our method gives
performance comparable to state-of-the-art methods in detecting adversaries
while not getting confused by noisy images, where most existing methods falter.
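The abstract does not spell out the detection rule, but the general recipe for reconstruction-based adversary detection is easy to sketch. Below is a minimal, hypothetical PyTorch sketch (not the authors' released code): a CVAE conditioned on the classifier's predicted label reconstructs the input, a threshold is fitted to the tail of the reconstruction-error distribution on clean data, and inputs whose error exceeds that threshold are flagged as adversarial. The `cvae` and `classifier` objects and their interfaces are assumptions.

```python
# Minimal sketch of CVAE-based adversary detection (assumed PyTorch API).
# The cvae(x, y) forward pass returning (x_hat, mu, logvar) is hypothetical.
import torch

def reconstruction_error(cvae, x, y_pred):
    """Per-sample squared reconstruction error, conditioning the CVAE on
    the classifier's predicted label."""
    x_hat, mu, logvar = cvae(x, y_pred)
    return ((x - x_hat) ** 2).flatten(1).sum(dim=1)

def fit_threshold(cvae, classifier, clean_loader, quantile=0.99):
    """Estimate a detection threshold from the empirical distribution of
    reconstruction errors on clean (unperturbed) samples."""
    errs = []
    with torch.no_grad():
        for x, _ in clean_loader:
            y_pred = classifier(x).argmax(dim=1)
            errs.append(reconstruction_error(cvae, x, y_pred))
    return torch.quantile(torch.cat(errs), quantile)

def is_adversarial(cvae, classifier, x, threshold):
    """Flag inputs whose reconstruction error lies in the far tail of the
    clean-error distribution (a simple one-sided test)."""
    with torch.no_grad():
        y_pred = classifier(x).argmax(dim=1)
        return reconstruction_error(cvae, x, y_pred) > threshold
```

The quantile (and hence the false-positive rate on clean images) is the main knob in a sketch like this; it would be tuned per dataset, e.g. separately for MNIST and CIFAR-10.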
Related papers
- RIGID: A Training-free and Model-Agnostic Framework for Robust AI-Generated Image Detection [60.960988614701414]
RIGID is a training-free and model-agnostic method for robust AI-generated image detection.
RIGID significantly outperforms existing training-based and training-free detectors.
arXiv Detail & Related papers (2024-05-30T14:49:54Z)
- Spatial-Frequency Discriminability for Revealing Adversarial Perturbations [53.279716307171604]
The vulnerability of deep neural networks to adversarial perturbations has been widely recognized in the computer vision community.
Current algorithms typically detect adversarial patterns through a discriminative decomposition of natural and adversarial data.
We propose a discriminative detector relying on a spatial-frequency Krawtchouk decomposition.
arXiv Detail & Related papers (2023-05-18T10:18:59Z)
- Masked Image Training for Generalizable Deep Image Denoising [53.03126421917465]
We present a novel approach to enhance the generalization performance of denoising networks.
Our method involves masking random pixels of the input image and reconstructing the missing information during training.
Our approach exhibits better generalization ability than other deep learning models and is directly applicable to real-world scenarios.
arXiv Detail & Related papers (2023-03-23T09:33:44Z)
- Deep Learning-Based Anomaly Detection in Synthetic Aperture Radar Imaging [11.12267144061017]
Our approach treats anomalies as abnormal patterns that deviate from their surroundings, without any prior knowledge of their characteristics.
Our proposed method aims to address these issues through a self-supervised algorithm.
Experiments are performed to show the advantages of our method compared to the conventional Reed-Xiaoli algorithm.
arXiv Detail & Related papers (2022-10-28T10:22:29Z)
- Towards Adversarial Purification using Denoising AutoEncoders [0.8701566919381223]
Adversarial attacks are often obtained by making subtle perturbations to normal images, which are mostly imperceptible to humans.
We propose a framework, named APuDAE, leveraging Denoising AutoEncoders (DAEs) to purify these samples by using them in an adaptive way.
We show how our framework provides comparable, and in most cases better, performance than the baseline methods in purifying adversaries.
arXiv Detail & Related papers (2022-08-29T19:04:25Z)
- Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection ability on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z)
- Robust Sensible Adversarial Learning of Deep Neural Networks for Image Classification [6.594522185216161]
We introduce sensible adversarial learning and demonstrate the synergistic effect between pursuits of standard natural accuracy and robustness.
Specifically, we define a sensible adversary which is useful for learning a robust model while keeping high natural accuracy.
We propose a novel and efficient algorithm that trains a robust model using implicit loss truncation.
arXiv Detail & Related papers (2022-05-20T22:57:44Z)
- A Study for Universal Adversarial Attacks on Texture Recognition [19.79803434998116]
We show that there exist small image-agnostic/universal perturbations that can fool deep learning models with testing fooling rates of more than 80% on all tested texture datasets.
The computed perturbations using various attack methods on the tested datasets are generally quasi-imperceptible, containing structured patterns with low, middle and high frequency components.
arXiv Detail & Related papers (2020-10-04T08:11:11Z)
- Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations [71.00754846434744]
We show that imperceptible additive perturbations can significantly alter the disparity map.
We show that, when used for adversarial data augmentation, our perturbations result in trained models that are more robust.
arXiv Detail & Related papers (2020-09-21T19:20:09Z)
- Improved Detection of Adversarial Images Using Deep Neural Networks [2.3993545400014873]
Recent studies indicate that machine learning models used for classification tasks are vulnerable to adversarial examples.
We propose a new approach called Feature Map Denoising to detect the adversarial inputs.
We show the performance of detection on a mixed dataset consisting of adversarial examples.
arXiv Detail & Related papers (2020-07-10T19:02:24Z)
- Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes [51.31334977346847]
We train networks to form coarse impressions based on the information in higher bit planes, and use the lower bit planes only to refine their prediction.
We demonstrate that, by imposing consistency on the representations learned across differently quantized images, the adversarial robustness of networks improves significantly (a toy bit-plane decomposition is sketched after this list).
arXiv Detail & Related papers (2020-04-01T09:31:10Z)
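The bit-plane idea in the last entry above is easy to illustrate: an 8-bit image splits into eight binary planes, with the higher planes carrying coarse structure and the lower planes carrying the fine detail that small perturbations mostly affect. The NumPy code below is only a toy illustration of that decomposition, not the cited paper's training procedure; all names here are made up for the example.

```python
# Toy illustration of bit-plane decomposition for an 8-bit image
# (not the cited paper's code). Higher planes carry coarse structure;
# lower planes carry fine detail.
import numpy as np

def bit_planes(img_uint8):
    """Return a list of 8 binary planes, index 0 = least significant bit."""
    return [(img_uint8 >> b) & 1 for b in range(8)]

def coarse_reconstruction(img_uint8, keep_top=4):
    """Rebuild the image from only its keep_top most significant planes,
    i.e. a coarsely quantized version of the input."""
    planes = bit_planes(img_uint8)
    out = np.zeros(img_uint8.shape, dtype=np.uint16)
    for b in range(8 - keep_top, 8):
        out += planes[b].astype(np.uint16) << b
    return out.astype(np.uint8)

# Example: quantize a random 8-bit "image" to its top 4 bit planes (16 levels)
img = np.random.randint(0, 256, size=(32, 32), dtype=np.uint8)
coarse = coarse_reconstruction(img, keep_top=4)
```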