Finite Gaussian Neurons: Defending against adversarial attacks by making
neural networks say "I don't know"
- URL: http://arxiv.org/abs/2306.07796v1
- Date: Tue, 13 Jun 2023 14:17:25 GMT
- Title: Finite Gaussian Neurons: Defending against adversarial attacks by making
neural networks say "I don't know"
- Authors: Felix Grezes
- Abstract summary: I introduce the Finite Gaussian Neuron (FGN), a novel neuron architecture for artificial neural networks.
My work aims to easily convert existing models to the FGN architecture, while preserving the existing model's behavior on real data and offering resistance against adversarial attacks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Since 2014, artificial neural networks have been known to be vulnerable to
adversarial attacks, which can fool the network into producing wrong or
nonsensical outputs by making humanly imperceptible alterations to inputs.
While defenses against adversarial attacks have been proposed, they usually
involve retraining a new neural network from scratch, a costly task. In this
work, I introduce the Finite Gaussian Neuron (FGN), a novel neuron architecture
for artificial neural networks. My work aims to easily convert existing models
to the Finite Gaussian Neuron architecture, while preserving the existing
model's behavior on real data and offering resistance against adversarial
attacks. I show that converted and retrained Finite Gaussian Neural Networks
(FGNN) always have lower confidence (i.e., are not overconfident) in their
predictions over randomized and Fast Gradient Sign Method adversarial images
when compared to classical neural networks, while maintaining high accuracy and
confidence over real MNIST images. To further validate the capacity of Finite
Gaussian Neurons to protect from adversarial attacks, I compare the behavior of
FGNs to that of Bayesian Neural Networks against both randomized and
adversarial images, and show how the behavior of the two architectures differs.
Finally, I show some limitations of the FGN models by testing them on the more
complex SPEECHCOMMANDS task, against the stronger Carlini-Wagner and Projected
Gradient Descent adversarial attacks.
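The abstract describes two mechanisms concretely enough to sketch: the FGN itself and the FGSM attack used in the evaluation. The snippet below is a minimal, hedged sketch, assuming an FGN multiplies a classical neuron's response by a Gaussian envelope over the input so that activations vanish far from the training data; the exact parameterization in the paper may differ, and the names `center` and `sigma` are illustrative. The FGSM step (adding epsilon times the sign of the input gradient) is the standard formulation.

```python
import numpy as np

def fgn_forward(x, w, b, center, sigma):
    """Sketch of a single Finite Gaussian Neuron (FGN).

    The classical response w.x + b is damped by a Gaussian envelope over
    the input, so points far from the region covered by the training data
    push the activation toward zero and the network "says I don't know".
    """
    linear = np.dot(w, x) + b                                            # classical neuron response
    envelope = np.exp(-np.sum((x - center) ** 2) / (2.0 * sigma ** 2))   # Gaussian damping term
    return np.tanh(linear) * envelope                                    # bounded, locally supported output


def fgsm_perturb(x, grad_wrt_x, eps=0.1):
    """Fast Gradient Sign Method: the one-step attack used in the paper's
    MNIST comparison; eps controls the perturbation magnitude."""
    return np.clip(x + eps * np.sign(grad_wrt_x), 0.0, 1.0)              # keep pixels in a valid range
```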
Related papers
- Benign Overfitting for Two-layer ReLU Convolutional Neural Networks [60.19739010031304]
We establish algorithm-dependent risk bounds for learning two-layer ReLU convolutional neural networks with label-flipping noise.
We show that, under mild conditions, the neural network trained by gradient descent can achieve near-zero training loss and Bayes optimal test risk.
arXiv Detail & Related papers (2023-03-07T18:59:38Z) - Adversarial Defense via Neural Oscillation inspired Gradient Masking [0.0]
Spiking neural networks (SNNs) attract great attention due to their low power consumption, low latency, and biological plausibility.
We propose a novel neural model that incorporates the bio-inspired oscillation mechanism to enhance the security of SNNs.
arXiv Detail & Related papers (2022-11-04T02:13:19Z) - Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z) - Adversarial Robustness in Deep Learning: Attacks on Fragile Neurons [0.6899744489931016]
We identify fragile and robust neurons of deep learning architectures using nodal dropouts of the first convolutional layer.
We correlate these neurons with the distribution of adversarial attacks on the network.
arXiv Detail & Related papers (2022-01-31T14:34:07Z) - Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of backbone CNNs that already have satisfactory accuracy.
With minimal computational overhead, the dilation architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z) - The Compact Support Neural Network [6.47243430672461]
We present a neuron generalization that recovers the standard dot-product neuron and the RBF neuron as the two extremes of a shape parameter.
We show how to avoid difficulties in training a neural network with such neurons by starting from a trained standard neural network and gradually increasing the shape parameter to the desired value (an illustrative sketch of this interpolation follows this list).
arXiv Detail & Related papers (2021-04-01T06:08:09Z) - BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by
Adversarial Attacks [65.2021953284622]
We study robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z) - Towards Natural Robustness Against Adversarial Examples [35.5696648642793]
We show that a new family of deep neural networks called Neural ODEs admits a weaker upper bound on how much their outputs can change under input perturbations.
This weaker upper bound prevents the change in the result from becoming too large.
We show that the natural robustness of Neural ODEs is even better than the robustness of neural networks that are trained with adversarial training methods.
arXiv Detail & Related papers (2020-12-04T08:12:38Z) - Artificial Neural Variability for Deep Learning: On Overfitting, Noise
Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks learn some advantages from "natural" neural networks.
ANV acts as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible costs.
arXiv Detail & Related papers (2020-11-12T06:06:33Z) - Neural Networks with Recurrent Generative Feedback [61.90658210112138]
We instantiate this recurrent generative feedback design on convolutional neural networks (CNNs).
In the experiments, CNN-F shows considerably improved adversarial robustness over conventional feedforward CNNs on standard benchmarks.
arXiv Detail & Related papers (2020-07-17T19:32:48Z) - Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects
of Discrete Input Encoding and Non-Linear Activations [9.092733355328251]
Spiking Neural Network (SNN) is a potential candidate for inherent robustness against adversarial attacks.
In this work, we demonstrate that adversarial accuracy of SNNs under gradient-based attacks is higher than their non-spiking counterparts.
arXiv Detail & Related papers (2020-03-23T17:20:24Z)
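The Compact Support Neural Network entry above describes a neuron whose shape parameter interpolates between a standard dot-product neuron and an RBF neuron. The sketch below is purely illustrative and is not the formula from that paper: it simply blends the two responses with a hypothetical parameter `alpha`, where 0 gives the dot-product neuron and 1 gives an RBF-style response based on the squared distance to the weight vector.

```python
import numpy as np

def interpolated_neuron(x, w, b, alpha):
    """Illustrative blend of a dot-product neuron (alpha=0) and an
    RBF-style neuron (alpha=1); not the cited paper's exact formulation."""
    dot_response = np.dot(w, x) + b           # standard pre-activation
    rbf_response = -np.sum((x - w) ** 2)      # negative squared distance to the weight vector
    return np.maximum(0.0, (1.0 - alpha) * dot_response + alpha * rbf_response)

# Training strategy from the entry above: start from a trained standard
# network (alpha = 0) and gradually increase alpha toward the target value.
```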