Defending Adversarial Examples via DNN Bottleneck Reinforcement
- URL: http://arxiv.org/abs/2008.05230v1
- Date: Wed, 12 Aug 2020 11:02:01 GMT
- Title: Defending Adversarial Examples via DNN Bottleneck Reinforcement
- Authors: Wenqing Liu, Miaojing Shi, Teddy Furon, Li Li
- Abstract summary: This paper presents a reinforcement scheme to alleviate the vulnerability of Deep Neural Networks (DNN) against adversarial attacks.
The information bottleneck of a DNN classifier trades off image-specific structure against class-specific information; by reinforcing the former while maintaining the latter, any redundant information, be it adversarial or not, should be removed from the latent representation.
To reinforce the information bottleneck, we introduce a multi-scale low-pass objective and multi-scale high-frequency communication for better frequency steering in the network.
- Score: 20.08619981108837
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a DNN bottleneck reinforcement scheme to alleviate the
vulnerability of Deep Neural Networks (DNN) against adversarial attacks.
Typical DNN classifiers encode the input image into a compressed latent
representation more suitable for inference. This information bottleneck makes a
trade-off between the image-specific structure and class-specific information
in an image. By reinforcing the former while maintaining the latter, any
redundant information, be it adversarial or not, should be removed from the
latent representation. Hence, this paper proposes to jointly train an
auto-encoder (AE) sharing the same encoding weights with the visual classifier.
In order to reinforce the information bottleneck, we introduce the multi-scale
low-pass objective and multi-scale high-frequency communication for better
frequency steering in the network. Unlike existing approaches, our scheme is
the first defense that reforms the network itself: it keeps the classifier
structure untouched, appends no pre-processing head, and is trained with clean
images only. Extensive experiments on MNIST, CIFAR-10 and ImageNet demonstrate
the strong defense of our method against various adversarial attacks.
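A minimal PyTorch sketch of the joint training idea in the abstract: one encoder feeds both a classifier head and a decoder, so reconstruction reinforces image structure in the bottleneck while cross-entropy preserves class information. All sizes and module names, and the average-pooled low-pass targets, are illustrative assumptions; the multi-scale high-frequency communication paths are omitted. This is a sketch of the principle, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedEncoder(nn.Module):
    """Encoder shared by the classifier and the auto-encoder (toy sizes)."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, 3, stride=2, padding=1)   # 32x32 -> 16x16
        self.conv2 = nn.Conv2d(32, 64, 3, stride=2, padding=1)  # 16x16 -> 8x8

    def forward(self, x):
        return F.relu(self.conv2(F.relu(self.conv1(x))))

class Decoder(nn.Module):
    """Reconstructs the image at two scales for the multi-scale objective."""
    def __init__(self):
        super().__init__()
        self.up2 = nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1)  # 8 -> 16
        self.up1 = nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1)   # 16 -> 32
        self.to_rgb = nn.Conv2d(32, 3, 1)  # half-resolution reconstruction head

    def forward(self, z):
        d = F.relu(self.up2(z))
        return torch.sigmoid(self.up1(d)), torch.sigmoid(self.to_rgb(d))

def multiscale_lowpass_loss(recon_full, recon_half, image):
    # Assumed stand-in for the multi-scale low-pass objective: match each
    # reconstruction against an average-pooled (low-pass) target.
    loss = F.mse_loss(recon_full, F.avg_pool2d(image, 3, stride=1, padding=1))
    return loss + F.mse_loss(recon_half, F.avg_pool2d(image, 2))

encoder, decoder = SharedEncoder(), Decoder()
head = nn.Linear(64 * 8 * 8, 10)  # classifier head on the shared bottleneck
params = [*encoder.parameters(), *decoder.parameters(), *head.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)

images = torch.rand(8, 3, 32, 32)
labels = torch.randint(0, 10, (8,))

z = encoder(images)
recon_full, recon_half = decoder(z)
loss = F.cross_entropy(head(z.flatten(1)), labels) \
     + multiscale_lowpass_loss(recon_full, recon_half, images)
opt.zero_grad(); loss.backward(); opt.step()
```

Only clean images enter the loop, matching the training setup described in the abstract.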
Related papers
- Efficient Image-to-Image Diffusion Classifier for Adversarial Robustness [24.465567005078135]
Diffusion models (DMs) have demonstrated great potential in the field of adversarial robustness.
However, DM-based defenses incur huge computational costs due to their reliance on large-scale pre-trained DMs.
We introduce an efficient Image-to-Image diffusion classifier with a pruned U-Net structure and reduced diffusion timesteps.
Our method achieves better adversarial robustness with fewer computational costs than DM-based and CNN-based methods.
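As a hedged aside, the generic diffusion-classifier recipe behind such methods scores each class by how well a class-conditioned denoiser predicts the injected noise; reducing the number of timesteps, as the entry above proposes, directly shrinks the inner loop. The toy model and noise schedule below are assumptions, not this paper's pruned U-Net.

```python
import torch
import torch.nn as nn

class ToyCondDenoiser(nn.Module):
    """Class-conditioned noise predictor over flattened toy 'images'."""
    def __init__(self, dim=64, num_classes=10):
        super().__init__()
        self.embed = nn.Embedding(num_classes, dim)
        self.net = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))

    def forward(self, x_t, c):
        cond = self.embed(c).expand(x_t.size(0), -1)
        return self.net(torch.cat([x_t, cond], dim=1))

@torch.no_grad()
def diffusion_classify(model, x, num_classes=10, alpha_bars=(0.9, 0.6, 0.3)):
    # Score class c by the noise-prediction error at a few noise levels;
    # alpha_bars stands in for the cumulative schedule (an assumption here).
    scores = []
    for c in range(num_classes):
        cls, err = torch.tensor([c]), 0.0
        for a in alpha_bars:
            noise = torch.randn_like(x)
            x_t = a ** 0.5 * x + (1 - a) ** 0.5 * noise  # forward diffusion
            err += (model(x_t, cls) - noise).pow(2).mean().item()
        scores.append(err)
    return min(range(num_classes), key=scores.__getitem__)

print(diffusion_classify(ToyCondDenoiser(), torch.rand(1, 64)))
```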
arXiv Detail & Related papers (2024-08-16T03:01:07Z)
- Deep Multi-Threshold Spiking-UNet for Image Processing [51.88730892920031]
This paper introduces the novel concept of Spiking-UNet for image processing, which combines the power of Spiking Neural Networks (SNNs) with the U-Net architecture.
To achieve an efficient Spiking-UNet, we face two primary challenges: ensuring high-fidelity information propagation through the network via spikes and formulating an effective training strategy.
Experimental results show that, on image segmentation and denoising, our Spiking-UNet achieves comparable performance to its non-spiking counterpart.
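A minimal sketch of one way multi-threshold spiking can raise information fidelity per spike: the membrane potential is compared against several thresholds and the emitted level is subtracted as a soft reset. The threshold values and reset rule are assumptions, not the paper's exact neuron model.

```python
import torch

def multi_threshold_spike(potential, thresholds=(1.0, 2.0, 4.0)):
    # Emit the largest threshold level the potential reaches (0 if none),
    # so a single spike carries a graded value instead of a plain 0/1.
    out = torch.zeros_like(potential)
    for t in sorted(thresholds):
        out = torch.where(potential >= t, torch.full_like(potential, t), out)
    return out

def if_step(potential, input_current, thresholds=(1.0, 2.0, 4.0)):
    # One integrate-and-fire timestep with a soft reset by subtraction.
    potential = potential + input_current
    spikes = multi_threshold_spike(potential, thresholds)
    return potential - spikes, spikes

v = torch.zeros(4)
for current in torch.tensor([[0.6, 1.2, 2.5, 5.0]] * 3):  # 3 timesteps
    v, s = if_step(v, current)
    print(s.tolist())
```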
arXiv Detail & Related papers (2023-07-20T16:00:19Z)
- A Perturbation Resistant Transformation and Classification System for Deep Neural Networks [0.685316573653194]
Deep convolutional neural networks accurately classify a diverse range of natural images, but may be easily deceived when carefully designed, imperceptible perturbations are embedded in the input.
In this paper, we design a multi-pronged training, unbounded input transformation, and image ensemble system that is attack-agnostic and not easily estimated.
arXiv Detail & Related papers (2022-08-25T02:58:47Z)
- Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
This paper pursues the holistic goal of maintaining spatially precise, high-resolution representations through the entire network.
We learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
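A hedged sketch of the multi-scale idea: a full-resolution stream keeps spatial detail while fused lower-resolution branches contribute context. This is a generic rendering of the principle, not the paper's actual blocks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFusion(nn.Module):
    """Full-resolution stream enriched with context from coarser branches."""
    def __init__(self, channels=32):
        super().__init__()
        self.full = nn.Conv2d(channels, channels, 3, padding=1)
        self.half = nn.Conv2d(channels, channels, 3, padding=1)
        self.quarter = nn.Conv2d(channels, channels, 3, padding=1)
        self.fuse = nn.Conv2d(3 * channels, channels, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        f1 = F.relu(self.full(x))                          # precise detail
        f2 = F.relu(self.half(F.avg_pool2d(x, 2)))         # wider context
        f4 = F.relu(self.quarter(F.avg_pool2d(x, 4)))      # widest context
        up = lambda f: F.interpolate(f, size=(h, w), mode="bilinear",
                                     align_corners=False)
        # The residual skip keeps the spatially precise signal intact.
        return x + self.fuse(torch.cat([f1, up(f2), up(f4)], dim=1))

x = torch.rand(1, 32, 64, 64)
print(MultiScaleFusion()(x).shape)  # torch.Size([1, 32, 64, 64])
```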
arXiv Detail & Related papers (2022-04-19T17:59:45Z)
- DeepCert: Verification of Contextually Relevant Robustness for Neural Network Image Classifiers [16.893762648621266]
We introduce DeepCert, a tool-supported method for verifying the robustness of deep neural network (DNN) image classifiers to contextually relevant perturbations.
arXiv Detail & Related papers (2021-03-02T10:41:16Z)
- Image Restoration by Deep Projected GSURE [115.57142046076164]
Ill-posed inverse problems appear in many image processing applications, such as deblurring and super-resolution.
We propose a new image restoration framework that is based on minimizing a loss function that includes a "projected version" of the Generalized Stein Unbiased Risk Estimator (GSURE) and a parameterization of the latent image by a CNN.
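The full projected GSURE involves the pseudo-inverse of the degradation operator; as a simplified, hedged illustration, the sketch below implements plain SURE for denoising with the standard Monte Carlo divergence estimate. The noise level sigma and probe scale eps are assumed inputs.

```python
import torch

def mc_sure_loss(denoiser, y, sigma, eps=1e-3):
    # SURE(f) = ||f(y) - y||^2 / n  -  sigma^2  +  (2 sigma^2 / n) div_y f(y):
    # an unbiased estimate of the MSE to the unknown clean image under
    # y = x + n, n ~ N(0, sigma^2 I).
    n = y.numel()
    fy = denoiser(y)
    b = torch.randn_like(y)  # Monte Carlo probe for the divergence term
    div = (b * (denoiser(y + eps * b) - fy)).sum() / eps
    return (fy - y).pow(2).sum() / n - sigma ** 2 + 2 * sigma ** 2 * div / n

# Toy usage with a linear shrinkage "denoiser"; a CNN parameterization plugs
# in the same way, with this loss backpropagated into its weights.
denoiser = lambda y: 0.9 * y
y = torch.rand(1, 1, 16, 16) + 0.1 * torch.randn(1, 1, 16, 16)
print(mc_sure_loss(denoiser, y, sigma=0.1).item())
```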
arXiv Detail & Related papers (2021-02-04T08:52:46Z)
- A Neuro-Inspired Autoencoding Defense Against Adversarial Perturbations [11.334887948796611]
Deep Neural Networks (DNNs) are vulnerable to adversarial attacks.
The most effective current defense is to train the network using adversarially perturbed examples.
In this paper, we investigate a radically different, neuro-inspired defense mechanism.
arXiv Detail & Related papers (2020-11-21T21:03:08Z)
- Learning Deep Interleaved Networks with Asymmetric Co-Attention for Image Restoration [65.11022516031463]
We present a deep interleaved network (DIN) that learns how information at different states should be combined for high-quality (HQ) image reconstruction.
In this paper, we propose asymmetric co-attention (AsyCA) which is attached at each interleaved node to model the feature dependencies.
Our presented DIN can be trained end-to-end and applied to various image restoration tasks.
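A hedged sketch of attention-based feature fusion at an interleaved node: per-channel weights computed from two incoming states decide how they are mixed. The gating design is an illustrative assumption, not the paper's AsyCA module.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Fuse features from two network states with learned per-channel weights."""
    def __init__(self, channels=32):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels, 1), nn.ReLU(),
            nn.Conv2d(channels, 2 * channels, 1))

    def forward(self, a, b):
        w = self.gate(torch.cat([a, b], dim=1))            # (B, 2C, 1, 1)
        w = torch.softmax(w.view(a.size(0), 2, -1, 1, 1), dim=1)
        return w[:, 0] * a + w[:, 1] * b                   # convex per-channel mix

a, b = torch.rand(1, 32, 16, 16), torch.rand(1, 32, 16, 16)
print(AttentionFusion()(a, b).shape)  # torch.Size([1, 32, 16, 16])
```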
arXiv Detail & Related papers (2020-10-29T15:32:00Z)
- Stylized Adversarial Defense [105.88250594033053]
Adversarial training creates perturbation patterns and includes them in the training set to robustify the model.
We propose to exploit additional information from the feature space to craft stronger adversaries.
Our adversarial training approach demonstrates strong robustness compared to state-of-the-art defenses.
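A hedged sketch of adversarial training with a feature-space term in the attack objective, loosely following the entry above: a PGD-style inner loop pushes both the logits and the features away from their clean values, and the model then trains on the crafted examples. The perturbation budget, step sizes, and toy network are illustrative choices, not the paper's recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.head = nn.Linear(16 * 16, num_classes)

    def forward(self, x):
        return self.head(self.features(x))

def craft_adversary(model, x, y, eps=8 / 255, alpha=2 / 255, steps=7, beta=1.0):
    clean_feat = model.features(x).detach()
    x_adv = (x + eps * torch.empty_like(x).uniform_(-1, 1)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        feat = model.features(x_adv)
        # Ascend on cross-entropy plus distance from the clean features.
        loss = F.cross_entropy(model.head(feat), y) + beta * F.mse_loss(feat, clean_feat)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)  # project
    return x_adv

model = TinyNet()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
loss = F.cross_entropy(model(craft_adversary(model, x, y)), y)  # train on adversaries
opt.zero_grad(); loss.backward(); opt.step()
```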
arXiv Detail & Related papers (2020-07-29T08:38:10Z)
- ADRN: Attention-based Deep Residual Network for Hyperspectral Image Denoising [52.01041506447195]
We propose an attention-based deep residual network to learn a mapping from a noisy hyperspectral image (HSI) to the clean one.
Experimental results demonstrate that our proposed ADRN scheme outperforms the state-of-the-art methods in both quantitative and visual evaluations.
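A hedged sketch of an attention-equipped residual block of the kind such denoisers stack: squeeze-and-excitation style channel attention reweights the residual before the skip connection adds it back. Channel counts are toy values, not the ADRN architecture.

```python
import torch
import torch.nn as nn

class AttentionResBlock(nn.Module):
    """Residual denoising block with squeeze-and-excitation channel attention."""
    def __init__(self, channels=31):  # e.g. 31 spectral bands in a toy HSI
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1), nn.ReLU(),
            nn.Conv2d(channels // 4, channels, 1), nn.Sigmoid())

    def forward(self, x):
        res = self.body(x)
        return x + res * self.attn(res)  # reweighted residual correction

noisy = torch.rand(1, 31, 64, 64)
print(AttentionResBlock()(noisy).shape)  # torch.Size([1, 31, 64, 64])
```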
arXiv Detail & Related papers (2020-03-04T08:36:27Z)
- Code-Bridged Classifier (CBC): A Low or Negative Overhead Defense for Making a CNN Classifier Robust Against Adversarial Attacks [13.813609420433238]
We propose the Code-Bridged Classifier (CBC), a framework for making a Convolutional Neural Network robust against adversarial attacks.
We illustrate that this network is not only more robust to adversarial examples but also has significantly lower computational complexity than prior-art defenses.
arXiv Detail & Related papers (2020-01-16T22:16:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.