Code-Bridged Classifier (CBC): A Low or Negative Overhead Defense for
Making a CNN Classifier Robust Against Adversarial Attacks
- URL: http://arxiv.org/abs/2001.06099v1
- Date: Thu, 16 Jan 2020 22:16:58 GMT
- Title: Code-Bridged Classifier (CBC): A Low or Negative Overhead Defense for
Making a CNN Classifier Robust Against Adversarial Attacks
- Authors: Farnaz Behnia, Ali Mirzaeian, Mohammad Sabokrou, Sai Manoj, Tinoosh
Mohsenin, Khaled N. Khasawneh, Liang Zhao, Houman Homayoun, Avesta Sasan
- Abstract summary: We propose Code-Bridged Classifier (CBC), a framework for making a Convolutional Neural Network robust against adversarial attacks.
We illustrate that this network is not only more robust to adversarial examples but also has a significantly lower computational complexity than prior-art defenses.
- Score: 13.813609420433238
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose Code-Bridged Classifier (CBC), a framework for
making a Convolutional Neural Network (CNN) robust against adversarial attacks
without increasing, and in some cases even decreasing, the overall model's
computational complexity. More specifically, we propose a stacked
encoder-convolutional model, in which the input image is first encoded by the
encoder module of a denoising auto-encoder, and then the resulting latent
representation (without being decoded) is fed to a reduced-complexity CNN for
image classification. We illustrate that this network is not only more robust
to adversarial examples but also has a significantly lower computational
complexity than prior-art defenses.
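A minimal PyTorch sketch of the stacked encoder-classifier idea described above; layer sizes and names are illustrative assumptions, not the paper's exact architecture. The encoder half of a pre-trained denoising auto-encoder produces a latent map that is fed, without decoding, to a reduced-complexity CNN.
```python
import torch
import torch.nn as nn

class DenoisingEncoder(nn.Module):
    """Encoder half of a denoising auto-encoder (assumed pre-trained on noisy->clean reconstruction)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class ReducedCNN(nn.Module):
    """Reduced-complexity classifier that consumes the latent map instead of raw pixels."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, num_classes),
        )

    def forward(self, z):
        return self.net(z)

class CBC(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.encoder = DenoisingEncoder()         # weights copied from the trained DAE
        self.classifier = ReducedCNN(num_classes)

    def forward(self, x):
        return self.classifier(self.encoder(x))   # the latent is never decoded

logits = CBC()(torch.randn(1, 3, 32, 32))         # usage on a CIFAR-sized input
```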
Related papers
- $ε$-VAE: Denoising as Visual Decoding [61.29255979767292]
In generative modeling, tokenization simplifies complex data into compact, structured representations, creating a more efficient, learnable space.
Current visual tokenization methods rely on a traditional autoencoder framework, where the encoder compresses data into latent representations, and the decoder reconstructs the original input.
We propose denoising as decoding, shifting from single-step reconstruction to iterative refinement. Specifically, we replace the decoder with a diffusion process that iteratively refines noise to recover the original image, guided by the latents provided by the encoder.
We evaluate our approach by assessing both reconstruction (rFID) and generation quality.
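As a toy illustration of this "denoising as decoding" idea, the loop below iteratively refines noise toward an image, conditioning each step on the encoder's latent. TinyDenoiser, the step schedule, and the conditioning scheme are illustrative stand-ins, not $ε$-VAE's actual diffusion decoder.
```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    def __init__(self, latent_dim=16):
        super().__init__()
        self.cond = nn.Linear(latent_dim, 3)      # inject the latent as a channel bias
        self.refine = nn.Conv2d(3, 3, 3, padding=1)

    def forward(self, x, z, t_frac):
        bias = self.cond(z)[:, :, None, None]
        return x - t_frac[:, None, None, None] * self.refine(x + bias)

def iterative_decode(denoiser, z, steps=10, shape=(1, 3, 32, 32)):
    x = torch.randn(shape)                        # start from pure noise
    for t in reversed(range(1, steps + 1)):
        t_frac = torch.full((shape[0],), t / steps)
        x = denoiser(x, z, t_frac)                # one refinement step
    return x

image = iterative_decode(TinyDenoiser(), torch.randn(1, 16))
```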
arXiv Detail & Related papers (2024-10-05T08:27:53Z)
- Collaborative Auto-encoding for Blind Image Quality Assessment [17.081262827258943]
Blind image quality assessment (BIQA) is a challenging problem with important real-world applications.
Recent efforts attempting to exploit powerful representations learned by deep neural networks (DNNs) are hindered by the lack of subjectively annotated data.
This paper presents a novel BIQA method which overcomes this fundamental obstacle.
arXiv Detail & Related papers (2023-05-24T03:45:03Z)
- Reducing Redundancy in the Bottleneck Representation of the Autoencoders [98.78384185493624]
Autoencoders are a type of unsupervised neural network that can be used to solve various tasks.
We propose a scheme to explicitly penalize feature redundancies in the bottleneck representation.
We tested our approach across different tasks: dimensionality reduction using three different datasets, image compression using the MNIST dataset, and image denoising using Fashion-MNIST.
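One common way to make such a penalty concrete (the paper's exact formulation may differ) is to drive the off-diagonal entries of the bottleneck features' correlation matrix toward zero, as in this hedged sketch:
```python
import torch

def redundancy_penalty(z: torch.Tensor) -> torch.Tensor:
    """z: (batch, features) bottleneck activations."""
    z = (z - z.mean(0)) / (z.std(0) + 1e-8)        # standardize each feature
    corr = (z.T @ z) / z.shape[0]                  # feature-by-feature correlation matrix
    off_diag = corr - torch.diag(torch.diag(corr))
    return (off_diag ** 2).sum()                   # push pairwise correlations toward zero

# total_loss = task_loss + lam * redundancy_penalty(bottleneck)   # lam is a tuning weight
```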
arXiv Detail & Related papers (2022-02-09T18:48:02Z)
- Optimising for Interpretability: Convolutional Dynamic Alignment Networks [108.83345790813445]
We introduce a new family of neural network models called Convolutional Dynamic Alignment Networks (CoDA Nets).
Their core building blocks are Dynamic Alignment Units (DAUs), which are optimised to transform their inputs with dynamically computed weight vectors that align with task-relevant patterns.
CoDA Nets model the classification prediction through a series of input-dependent linear transformations, allowing for linear decomposition of the output into individual input contributions.
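A simplified sketch of a Dynamic Alignment Unit: the weight vector is computed from the input itself, so the output is the input-dependent linear transform w(x)^T x. The weight generator and norm constraint here are simplifying assumptions.
```python
import torch
import torch.nn as nn

class DAU(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gen = nn.Linear(dim, dim)             # produces the dynamic weight vector

    def forward(self, x):                          # x: (batch, dim)
        w = self.gen(x)
        w = w / (w.norm(dim=1, keepdim=True) + 1e-8)   # bounded-norm weights
        return (w * x).sum(dim=1)                  # w(x)^T x, one output per sample

out = DAU(8)(torch.randn(4, 8))
```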
arXiv Detail & Related papers (2021-09-27T12:39:46Z)
- Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative adversarial attacks avoid the per-example iterative optimization required by gradient-based attacks.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely used models collapse, but also achieve good visual quality.
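A hedged sketch of a generative attack in this spirit: an auto-encoder emits a bounded perturbation that is added back to the input in a single forward pass. The architecture and the epsilon bound are illustrative assumptions, not SSAE's exact design.
```python
import torch
import torch.nn as nn

class PerturbationAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Conv2d(3, 16, 3, stride=2, padding=1)
        self.dec = nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1)

    def forward(self, x, eps=8 / 255):
        delta = torch.tanh(self.dec(torch.relu(self.enc(x)))) * eps  # ||delta||_inf <= eps
        return (x + delta).clamp(0, 1)             # adversarial example, no iterative loop

x_adv = PerturbationAE()(torch.rand(1, 3, 32, 32))
```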
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
- Rate Distortion Characteristic Modeling for Neural Image Compression [59.25700168404325]
End-to-end optimization capability offers neural image compression (NIC) superior lossy compression performance.
However, distinct models must be trained to reach different points in the rate-distortion (R-D) space.
We make efforts to formulate the essential mathematical functions that describe the R-D behavior of NIC, using deep networks and statistical modeling.
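As a toy instance of such R-D modeling, one can fit a parametric distortion-rate curve to a few measured points and then query unseen rates without training one model per point. The exponential form and the sample numbers below are assumptions for illustration.
```python
import numpy as np
from scipy.optimize import curve_fit

def d_of_r(r, a, b, c):
    return a * np.exp(-b * r) + c                  # assumed R-D functional form

rates = np.array([0.2, 0.5, 1.0, 2.0])             # bits per pixel (made-up samples)
dists = np.array([0.050, 0.024, 0.011, 0.004])     # distortion (e.g., MSE) at each rate
params, _ = curve_fit(d_of_r, rates, dists, p0=(0.1, 1.0, 0.0))
print(d_of_r(1.5, *params))                        # predicted distortion at an unseen rate
```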
arXiv Detail & Related papers (2021-06-24T12:23:05Z)
- Latent Code-Based Fusion: A Volterra Neural Network Approach [21.25021807184103]
We propose a deep structure encoder using the recently introduced Volterra Neural Networks (VNNs).
We show that the proposed approach demonstrates much-improved sample complexity over CNN-based auto-encoders, along with superior robust classification performance.
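A minimal sketch of a second-order Volterra-style layer, truncating the Volterra series at the quadratic term (a simplifying assumption): the output combines a linear map with pairwise input interactions.
```python
import torch
import torch.nn as nn

class Volterra2(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.w1 = nn.Linear(d_in, d_out)                         # first-order kernel
        self.w2 = nn.Parameter(torch.zeros(d_out, d_in, d_in))   # second-order kernel

    def forward(self, x):                                        # x: (batch, d_in)
        quad = torch.einsum('bi,oij,bj->bo', x, self.w2, x)      # x^T W2 x per output
        return self.w1(x) + quad

y = Volterra2(8, 4)(torch.randn(2, 8))
```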
arXiv Detail & Related papers (2021-04-10T18:29:01Z)
- Less is More: Pre-training a Strong Siamese Encoder Using a Weak Decoder [75.84152924972462]
Many real-world applications use Siamese networks to efficiently match text sequences at scale.
This paper pre-trains language models dedicated to sequence matching in Siamese architectures.
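A hedged sketch of the title's "strong encoder, weak decoder" recipe: a deliberately shallow decoder during autoencoding-style pre-training forces the encoder to carry the sequence information; matching later compares independently encoded sides. Module sizes and the cosine scoring are illustrative assumptions.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.TransformerEncoder(nn.TransformerEncoderLayer(d_model=256, nhead=8), num_layers=6)
weak_decoder = nn.TransformerDecoder(nn.TransformerDecoderLayer(d_model=256, nhead=8), num_layers=1)

def match_score(a_emb: torch.Tensor, b_emb: torch.Tensor) -> torch.Tensor:
    # a_emb, b_emb: (batch, seq_len, 256) encoder outputs for the two sides
    return F.cosine_similarity(a_emb.mean(dim=1), b_emb.mean(dim=1))
```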
arXiv Detail & Related papers (2021-02-18T08:08:17Z)
- A Neuro-Inspired Autoencoding Defense Against Adversarial Perturbations [11.334887948796611]
Deep Neural Networks (DNNs) are vulnerable to adversarial attacks.
The most effective current defense is adversarial training, i.e., training the network on adversarially perturbed examples.
In this paper, we investigate a radically different, neuro-inspired defense mechanism.
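For contrast, a minimal single-step (FGSM) adversarial-training step, i.e., the baseline defense mentioned above; the epsilon and the one-step attack are illustrative choices.
```python
import torch

def adv_train_step(model, loss_fn, opt, x, y, eps=8 / 255):
    x = x.clone().requires_grad_(True)
    grad, = torch.autograd.grad(loss_fn(model(x), y), x)   # gradient w.r.t. the input
    x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()   # FGSM perturbation
    opt.zero_grad()
    loss_fn(model(x_adv), y).backward()                    # train on the perturbed batch
    opt.step()
```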
arXiv Detail & Related papers (2020-11-21T21:03:08Z)
- Defending Adversarial Examples via DNN Bottleneck Reinforcement [20.08619981108837]
This paper presents a reinforcement scheme to alleviate the vulnerability of Deep Neural Networks (DNNs) against adversarial attacks.
By reinforcing the network's information bottleneck while maintaining its prediction accuracy, any redundant information, be it adversarial or not, should be removed from the latent representation.
In order to reinforce the information bottleneck, we introduce the multi-scale low-pass objective and multi-scale high-frequency communication for better frequency steering in the network.
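One concrete way such a multi-scale low-pass objective could look (a hedged sketch, not necessarily the paper's exact loss): at several scales, penalize the feature energy that a blur filter removes, steering the representation toward low frequencies.
```python
import torch.nn.functional as F

def multiscale_lowpass_loss(feat, scales=(1, 2, 4)):
    """feat: (batch, channels, H, W) feature map."""
    loss = 0.0
    for s in scales:
        f = F.avg_pool2d(feat, s) if s > 1 else feat        # view at this scale
        blurred = F.avg_pool2d(f, 3, stride=1, padding=1)   # cheap low-pass filter
        loss = loss + ((f - blurred) ** 2).mean()           # high-frequency residual energy
    return loss
```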
arXiv Detail & Related papers (2020-08-12T11:02:01Z)
- Enhancing Intrinsic Adversarial Robustness via Feature Pyramid Decoder [11.701729403940798]
We propose an attack-agnostic defence framework to enhance the intrinsic robustness of neural networks.
Our framework applies to all block-based convolutional neural networks (CNNs).
arXiv Detail & Related papers (2020-05-06T01:40:26Z)
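A loose, hypothetical sketch of why such a block-level defense "applies to all block-based CNNs": wrap each block so a lightweight module cleans its features. This is an illustration only, not the paper's actual Feature Pyramid Decoder.
```python
import torch.nn as nn

class DenoisedBlock(nn.Module):
    def __init__(self, block: nn.Module, channels: int):
        super().__init__()
        self.block = block
        self.denoise = nn.Conv2d(channels, channels, 3, padding=1)  # placeholder denoiser

    def forward(self, x):
        h = self.block(x)
        return h + self.denoise(h)    # residual feature cleaning

# Usage: wrap each stage of any block-based CNN, e.g., DenoisedBlock(resnet_stage, 256).
```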