Wavelets Beat Monkeys at Adversarial Robustness
- URL: http://arxiv.org/abs/2304.09403v1
- Date: Wed, 19 Apr 2023 03:41:30 GMT
- Title: Wavelets Beat Monkeys at Adversarial Robustness
- Authors: Jingtong Su and Julia Kempe
- Abstract summary: We show how physically inspired structures yield new insights into robustness that were previously only thought possible by meticulously mimicking the human cortex.
- Score: 0.8702432681310401
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Research on improving the robustness of neural networks to adversarial noise
- imperceptible malicious perturbations of the data - has received significant
attention. The currently uncontested state-of-the-art defense to obtain robust
deep neural networks is Adversarial Training (AT), but it consumes
significantly more resources compared to standard training and trades off
accuracy for robustness. An inspiring recent work [Dapello et al.] aims to
bring neurobiological tools to the question: How can we develop Neural Nets
that robustly generalize like human vision? [Dapello et al.] design a network
structure with a neural hidden first layer that mimics the primate primary
visual cortex (V1), followed by a back-end structure adapted from current CNN
vision models. It seems to achieve non-trivial adversarial robustness on
standard vision benchmarks when tested on small perturbations. Here we revisit
this biologically inspired work, and ask whether a principled parameter-free
representation with inspiration from physics is able to achieve the same goal.
We discover that the wavelet scattering transform can replace the complex
V1-cortex and simple uniform Gaussian noise can take the role of neural
stochasticity, to achieve adversarial robustness. In extensive experiments on
the CIFAR-10 benchmark with adaptive adversarial attacks we show that: 1)
Robustness of VOneBlock architectures is relatively weak (though non-zero) when
the strength of the adversarial attack radius is set to commonly used
benchmarks. 2) Replacing the front-end VOneBlock by an off-the-shelf
parameter-free Scatternet followed by simple uniform Gaussian noise can achieve
much more substantial adversarial robustness without adversarial training. Our
work shows how physically inspired structures yield new insights into
robustness that were previously only thought possible by meticulously mimicking
the human cortex.
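The recipe the abstract describes (a fixed wavelet scattering front-end, simple additive Gaussian noise, and a standard CNN back-end) can be sketched as follows. This is a minimal illustration that assumes the kymatio library for the scattering transform; the back-end and the noise level are placeholders, not the authors' released architecture.
```python
# Minimal sketch (not the paper's exact model): parameter-free scattering
# front-end + additive Gaussian noise + a small placeholder CNN back-end.
import torch
import torch.nn as nn
from kymatio.torch import Scattering2D


class ScatterNoiseNet(nn.Module):
    def __init__(self, noise_std=0.1, num_classes=10):
        super().__init__()
        # Fixed (untrained) wavelet scattering on 32x32 CIFAR-10 images.
        # With J=2 scales and the default L=8 orientations, each input channel
        # yields 81 scattering channels at 8x8 spatial resolution.
        self.scattering = Scattering2D(J=2, shape=(32, 32))
        self.noise_std = noise_std  # placeholder noise level, not taken from the paper
        self.backend = nn.Sequential(  # placeholder back-end CNN
            nn.Conv2d(3 * 81, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):                                # x: (B, 3, 32, 32)
        s = self.scattering(x)                           # (B, 3, 81, 8, 8)
        s = s.flatten(1, 2)                              # (B, 243, 8, 8)
        s = s + self.noise_std * torch.randn_like(s)     # stochastic front-end
        return self.backend(s)
```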
Related papers
- Finite Gaussian Neurons: Defending against adversarial attacks by making
neural networks say "I don't know" [0.0]
I introduce the Finite Gaussian Neuron (FGN), a novel neuron architecture for artificial neural networks.
My work aims to: easily convert existing models to the FGN architecture, preserve the existing model's behavior on real data, and offer resistance against adversarial attacks.
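A minimal sketch consistent with this description (the gating form and parameter names are assumptions, not the paper's definition): a neuron whose standard response is multiplied by a Gaussian of the input's distance from a learned center, so activity fades to zero far from the training data.
```python
# Illustrative only: a Gaussian-gated layer whose response vanishes far from
# learned centers, letting the network effectively say "I don't know".
# The center/sigma parameters are hypothetical.
import torch
import torch.nn as nn


class FiniteGaussianLayer(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)                  # classic neuron part
        self.center = nn.Parameter(torch.zeros(out_features, in_features))  # hypothetical per-neuron center
        self.log_sigma = nn.Parameter(torch.zeros(out_features))            # hypothetical radius (sigma = 1 at init)

    def forward(self, x):                                          # x: (B, in_features)
        dist2 = ((x.unsqueeze(1) - self.center) ** 2).sum(dim=-1)  # (B, out_features)
        gate = torch.exp(-dist2 / (2.0 * torch.exp(self.log_sigma) ** 2))
        # Far from the centers the gate tends to 0, so the layer's response
        # fades out instead of extrapolating confidently.
        return torch.tanh(self.linear(x)) * gate
```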
arXiv Detail & Related papers (2023-06-13T14:17:25Z) - Beyond Pretrained Features: Noisy Image Modeling Provides Adversarial
Defense [52.66971714830943]
Masked image modeling (MIM) has become a prevailing framework for self-supervised visual representation learning.
In this paper, we investigate how this powerful self-supervised learning paradigm can provide adversarial robustness to downstream classifiers.
We propose an adversarial defense method, referred to as De3, by exploiting the pretrained decoder for denoising.
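A generic denoise-then-classify wrapper in the spirit of this summary (placeholder modules; not the authors' De3 implementation) could be wired up like this:
```python
# Sketch of a generic "denoise, then classify" defense: a pretrained denoiser
# (a placeholder standing in for an MIM decoder repurposed for denoising)
# cleans the input before a downstream classifier sees it.
import torch
import torch.nn as nn


class DenoisingDefense(nn.Module):
    def __init__(self, denoiser: nn.Module, classifier: nn.Module, noise_std=0.25):
        super().__init__()
        self.denoiser = denoiser      # e.g. a pretrained decoder used as a denoiser
        self.classifier = classifier  # downstream classifier
        self.noise_std = noise_std    # hypothetical noise level

    def forward(self, x):
        # Re-noise the (possibly adversarial) input so it matches the noise the
        # denoiser was trained to remove, then denoise and classify.
        noisy = x + self.noise_std * torch.randn_like(x)
        clean = self.denoiser(noisy)
        return self.classifier(clean)
```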
arXiv Detail & Related papers (2023-02-02T12:37:24Z) - Understanding Adversarial Robustness from Feature Maps of Convolutional
Layers [23.42376264664302]
The adversarial robustness of a neural network mainly relies on two factors: model capacity and anti-perturbation ability.
We study the anti-perturbation ability of the network from the feature maps of convolutional layers.
Non-trivial improvements in terms of both natural accuracy and adversarial robustness can be achieved under various attack and defense mechanisms.
arXiv Detail & Related papers (2022-02-25T00:14:59Z) - Adversarial Robustness in Deep Learning: Attacks on Fragile Neurons [0.6899744489931016]
We identify fragile and robust neurons of deep learning architectures using nodal dropouts of the first convolutional layer.
We correlate these neurons with the distribution of adversarial attacks on the network.
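A minimal sketch of this style of analysis (the evaluation helper robust_accuracy is hypothetical, standing in for any attack-and-evaluate routine) zeroes out the first convolutional layer's channels one at a time and records the change in robust accuracy:
```python
# Sketch: ablate each first-layer channel in turn and measure how adversarial
# accuracy changes; channels whose removal shifts robustness the most are the
# candidate fragile/robust neurons. `robust_accuracy` is a hypothetical helper.
import torch


def rank_first_layer_channels(model, loader, robust_accuracy):
    first_conv = next(m for m in model.modules() if isinstance(m, torch.nn.Conv2d))
    baseline = robust_accuracy(model, loader)
    scores = []
    for c in range(first_conv.out_channels):
        saved = first_conv.weight.data[c].clone()
        first_conv.weight.data[c].zero_()                 # nodal dropout of channel c
        scores.append(baseline - robust_accuracy(model, loader))
        first_conv.weight.data[c] = saved                 # restore the channel
    return scores  # large drop -> channel the robustness relies on
```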
arXiv Detail & Related papers (2022-01-31T14:34:07Z) - Adversarial Attacks on Spiking Convolutional Networks for Event-based
Vision [0.6999740786886537]
We show how white-box adversarial attack algorithms can be adapted to the discrete and sparse nature of event-based visual data.
We also verify, for the first time, the effectiveness of these perturbations directly on neuromorphic hardware.
arXiv Detail & Related papers (2021-10-06T17:20:05Z) - Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of the backbone CNNs that have a satisfactory accuracy.
With minimal computational overhead, the dilation architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z) - Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
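One way to write the notion down (an illustrative formalization consistent with the summary, with assumed notation rather than the paper's own) is as a worst-case loss over simultaneous bounded perturbations of the input and of the weights:
```latex
% Illustrative only; notation assumed, not taken from the paper.
% Joint (non-singular) robustness: the loss stays small under simultaneous
% bounded perturbations \delta of the input x and \Delta of the weights \theta.
\max_{\|\delta\| \le \epsilon_x,\; \|\Delta\| \le \epsilon_\theta}
  \ell\bigl(f_{\theta + \Delta}(x + \delta),\, y\bigr) \;\le\; \tau
```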
arXiv Detail & Related papers (2021-02-23T20:59:30Z) - Artificial Neural Variability for Deep Learning: On Overfitting, Noise
Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks learn some advantages from "natural" neural networks.
ANV plays as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible costs.
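As a rough illustration of one simple way to add such variability (not the paper's exact procedure; the noise scale is a hypothetical hyperparameter), weights can be perturbed with small Gaussian noise during training:
```python
# Sketch only: inject small Gaussian noise into a layer's weights at training
# time as one simple form of artificial neural variability.
import torch
import torch.nn as nn


class NoisyLinear(nn.Linear):
    def __init__(self, in_features, out_features, weight_noise_std=0.01):
        super().__init__(in_features, out_features)
        self.weight_noise_std = weight_noise_std  # hypothetical noise scale

    def forward(self, x):
        if self.training and self.weight_noise_std > 0:
            w = self.weight + self.weight_noise_std * torch.randn_like(self.weight)
        else:
            w = self.weight  # deterministic weights at evaluation time
        return nn.functional.linear(x, w, self.bias)
```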
arXiv Detail & Related papers (2020-11-12T06:06:33Z) - Neural Networks with Recurrent Generative Feedback [61.90658210112138]
We instantiate this design on convolutional neural networks (CNNs).
In the experiments, CNN-F shows considerably improved adversarial robustness over conventional feedforward CNNs on standard benchmarks.
arXiv Detail & Related papers (2020-07-17T19:32:48Z) - On sparse connectivity, adversarial robustness, and a novel model of the
artificial neuron [6.09170287691728]
We present a novel model of an artificial neuron, a "strong neuron," with low hardware requirements and inherent robustness against adversarial perturbations.
We demonstrate the feasibility of our approach through experiments on SVHN and GTSRB benchmarks.
We also prove that the constituent blocks of our strong neuron are the only activation functions with perfect stability against adversarial attacks.
arXiv Detail & Related papers (2020-06-16T20:45:08Z) - Feature Purification: How Adversarial Training Performs Robust Deep
Learning [66.05472746340142]
We present a principle that we call Feature Purification: one of the causes of the existence of adversarial examples is the accumulation of certain small dense mixtures in the hidden weights during the training process of a neural network.
We present both experiments on the CIFAR-10 dataset to illustrate this principle, and a theoretical result proving that for certain natural classification tasks, training a two-layer neural network with ReLU activation using randomly initialized gradient descent indeed satisfies this principle.
arXiv Detail & Related papers (2020-05-20T16:56:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.