On sparse connectivity, adversarial robustness, and a novel model of the artificial neuron
- URL: http://arxiv.org/abs/2006.09510v1
- Date: Tue, 16 Jun 2020 20:45:08 GMT
- Title: On sparse connectivity, adversarial robustness, and a novel model of the artificial neuron
- Authors: Sergey Bochkanov
- Abstract summary: We present a novel model of an artificial neuron, a "strong neuron," with low hardware requirements and inherent robustness against adversarial perturbations.
We demonstrate the feasibility of our approach through experiments on SVHN and GTSRB benchmarks.
We also prove that the constituent blocks of our strong neuron are the only activation functions with perfect stability against adversarial attacks.
- Score: 6.09170287691728
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks have achieved human-level accuracy on almost all
perceptual benchmarks. It is interesting that these advances were made using
two ideas that are decades old: (a) an artificial neuron based on a linear
summator and (b) SGD training.
However, there are important metrics beyond accuracy: computational
efficiency and stability against adversarial perturbations. In this paper, we
propose two closely connected methods to improve these metrics on contour
recognition tasks: (a) a novel model of an artificial neuron, a "strong
neuron," with low hardware requirements and inherent robustness against
adversarial perturbations and (b) a novel constructive training algorithm that
generates sparse networks with $O(1)$ connections per neuron.
We demonstrate the feasibility of our approach through experiments on SVHN
and GTSRB benchmarks. We achieved an impressive 10x-100x reduction in
operations count (10x when compared with other sparsification approaches, 100x
when compared with dense networks) and a substantial reduction in hardware
requirements (8-bit fixed-point math was used) with no reduction in model
accuracy. Superior stability against adversarial perturbations (exceeding that
of adversarial training) was achieved without any counteradversarial measures,
relying on the robustness of strong neurons alone. We also proved that
constituent blocks of our strong neuron are the only activation functions with
perfect stability against adversarial attacks.
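For intuition only, here is a minimal sketch (not taken from the paper) of a neuron built purely from min/max aggregation over a small, fixed set of inputs. The abstract does not spell out the constituent blocks, so treating them as min/max operations, the specific grouping, and the 8-bit input range below are all assumptions; the point of the sketch is the stability property: min and max are 1-Lipschitz in the infinity norm, so an input perturbation bounded by eps can shift the output by at most eps, with no counteradversarial training involved.

```python
import numpy as np

def strong_neuron(x, groups):
    """Hedged sketch of a summator-free "strong neuron".

    Assumption (not stated in the abstract): the neuron takes a MAX inside
    each small input group and a MIN across groups. Both operations are
    1-Lipschitz in the infinity norm, so an input perturbation bounded by
    eps moves the output by at most eps.

    x      : 1-D array of input activations (8-bit-range integers here)
    groups : list of index lists; a fixed, O(1) number of connections
    """
    return min(max(int(x[i]) for i in idx) for idx in groups)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.integers(-128, 128, size=16)          # fixed-point-style inputs
    groups = [[0, 3, 7], [2, 5], [9, 11, 14]]     # sparse, hand-picked fan-in

    y = strong_neuron(x, groups)

    # Empirical check of the robustness bound for eps = 4.
    eps, worst = 4, 0
    for _ in range(1000):
        delta = rng.integers(-eps, eps + 1, size=x.size)
        worst = max(worst, abs(strong_neuron(x + delta, groups) - y))
    assert worst <= eps
    print("output:", y, "| max shift under eps=4 perturbation:", worst)
```

The same 1-Lipschitz argument is what makes such blocks candidates for the "perfect stability" the abstract claims, whereas a linear summator can amplify a perturbation coordinated across all of its inputs.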
Related papers
- Decorrelating neurons using persistence [29.25969187808722]
We present two regularisation terms computed from the weights of a minimum spanning tree of a clique.
We demonstrate that naively minimising all correlations between neurons yields lower accuracies than using our regularisation terms.
We include a proof of differentiability of our regularisers, thus developing the first effective topological persistence-based regularisation terms (an illustrative sketch of the MST-based penalty idea follows this entry).
arXiv Detail & Related papers (2023-08-09T11:09:14Z)
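As a purely illustrative companion to the entry above, the sketch below shows one way a minimum-spanning-tree-based decorrelation penalty could be computed from a mini-batch of neuron activations. The distance used (one minus absolute correlation), the function name, and the way edge weights are aggregated are assumptions for illustration, not the paper's regularisers.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_decorrelation_penalty(activations):
    """Illustrative sketch: penalise redundant neurons via the MST of the
    clique whose edge weights are correlation distances between neurons.

    activations : (batch, n_neurons) array of neuron outputs.
    Returns a scalar that grows when many neurons are highly correlated
    (short MST edges) and shrinks when they are decorrelated.
    """
    corr = np.corrcoef(activations, rowvar=False)   # (n, n) pairwise correlations
    dist = 1.0 - np.abs(corr)                       # clique edge weights in [0, 1]
    mst = minimum_spanning_tree(dist)               # sparse matrix of n-1 tree edges
    return float((1.0 - mst.data).sum())            # large when edges are short

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    base = rng.normal(size=(256, 1))
    redundant = np.repeat(base, 8, axis=1) + 0.01 * rng.normal(size=(256, 8))
    diverse = rng.normal(size=(256, 8))
    print("redundant neurons:", mst_decorrelation_penalty(redundant))
    print("diverse neurons:  ", mst_decorrelation_penalty(diverse))
```

In a real training loop this scalar would be added to the task loss; making such persistence-based terms differentiable is part of the paper's contribution and is not attempted in this NumPy sketch.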
- Wavelets Beat Monkeys at Adversarial Robustness [0.8702432681310401]
We show how physically inspired structures yield new insights into robustness that were previously thought possible only by meticulously mimicking the human cortex.
arXiv Detail & Related papers (2023-04-19T03:41:30Z)
- Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve the empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted by up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z)
- Improving Adversarial Transferability via Neuron Attribution-Based Attacks [35.02147088207232]
We propose the Neuron Attribution-based Attack (NAA), which conducts feature-level attacks with more accurate neuron importance estimations.
We derive an approximation scheme of neuron attribution to tremendously reduce the overhead.
Experiments confirm the superiority of our approach over state-of-the-art benchmarks.
arXiv Detail & Related papers (2022-03-31T13:47:30Z)
- Few-shot Backdoor Defense Using Shapley Estimation [123.56934991060788]
We develop a new approach called Shapley Pruning (ShapPruning) to mitigate backdoor attacks on deep neural networks (a generic sketch of Shapley-based neuron scoring follows this entry).
ShapPruning identifies the few infected neurons (under 1% of all neurons) while preserving the model's structure and accuracy.
Experiments demonstrate the effectiveness and robustness of our method against various attacks and tasks.
arXiv Detail & Related papers (2021-12-30T02:27:03Z)
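The generic sketch below shows how Monte Carlo Shapley estimation of per-neuron contributions works in principle; it is not ShapPruning itself. The helper model_metric(mask) is a hypothetical callable that evaluates the network with a subset of neurons enabled, and the metric it returns (e.g. attack success rate or accuracy on a few clean samples) is left abstract.

```python
import numpy as np

def shapley_neuron_scores(model_metric, n_neurons, n_rounds=100, rng=None):
    """Generic Monte Carlo Shapley estimate of per-neuron contributions.

    model_metric : hypothetical helper; takes a boolean mask of enabled
                   neurons and returns a scalar metric for the masked model.
    Returns one estimated Shapley value per neuron: its average marginal
    change in the metric when switched on in a random order.
    """
    rng = rng or np.random.default_rng()
    scores = np.zeros(n_neurons)
    for _ in range(n_rounds):
        order = rng.permutation(n_neurons)
        mask = np.zeros(n_neurons, dtype=bool)
        prev = model_metric(mask)
        for j in order:                 # enable neurons one by one
            mask[j] = True
            cur = model_metric(mask)
            scores[j] += cur - prev     # marginal contribution of neuron j
            prev = cur
    return scores / n_rounds

if __name__ == "__main__":
    # Toy metric: only neurons 2 and 5 matter; the estimator should rank them highest.
    toy_metric = lambda mask: 0.7 * mask[2] + 0.3 * mask[5]
    print(np.round(shapley_neuron_scores(toy_metric, n_neurons=8, n_rounds=50), 3))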
- Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
- Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of the backbone CNNs that have a satisfactory accuracy.
With minimal computational overhead, the dilation architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z)
- The Compact Support Neural Network [6.47243430672461]
We present a neuron generalization that has the standard dot-product-based neuron and the RBF neuron as two extreme cases of a shape parameter (a hedged sketch follows this entry).
We show how to avoid difficulties in training a neural network with such neurons, by starting with a trained standard neural network and gradually increasing the shape parameter to the desired value.
arXiv Detail & Related papers (2021-04-01T06:08:09Z)
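As a hedged illustration of the entry above, the sketch below uses one plausible parameterisation in which a shape parameter alpha interpolates between a dot-product response (alpha = 0) and an RBF-like response whose support becomes compact after the ReLU (alpha = 1); the exact formula, names, and bias handling in the paper may differ.

```python
import numpy as np

def compact_support_neuron(x, w, b, alpha):
    """Hedged sketch: a neuron whose shape parameter alpha interpolates
    between a standard dot-product neuron and an RBF-like neuron.

        alpha = 0 : 2 * w.x + b          (unbounded support)
        alpha = 1 : b - ||x - w||^2      (compact support once ReLU is applied)
    """
    pre_activation = 2.0 * (x @ w) - alpha * (x @ x + w @ w) + b
    return np.maximum(pre_activation, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    w = rng.normal(size=8)
    x_near = w + 0.1 * rng.normal(size=8)    # close to the weight vector
    x_far = w + 10.0 * rng.normal(size=8)    # far from the weight vector
    for alpha in (0.0, 0.5, 1.0):
        print(f"alpha={alpha}:",
              round(float(compact_support_neuron(x_near, w, 1.0, alpha)), 3),
              round(float(compact_support_neuron(x_far, w, 1.0, alpha)), 3))
```

The training recipe in the summary corresponds to starting from a conventionally trained network at alpha = 0 and gradually increasing alpha toward the desired value.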
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
- And/or trade-off in artificial neurons: impact on adversarial robustness [91.3755431537592]
The presence of a sufficient number of OR-like neurons in a network can lead to classification brittleness and increased vulnerability to adversarial attacks.
We define AND-like neurons and propose measures to increase their proportion in the network.
Experimental results on the MNIST dataset suggest that our approach holds promise as a direction for further exploration.
arXiv Detail & Related papers (2021-02-15T08:19:05Z)