Enhancing Robustness of Neural Networks through Fourier Stabilization
- URL: http://arxiv.org/abs/2106.04435v1
- Date: Tue, 8 Jun 2021 15:12:31 GMT
- Title: Enhancing Robustness of Neural Networks through Fourier Stabilization
- Authors: Netanel Raviv, Aidan Kelley, Michael Guo, Yevgeny Vorobeychik
- Abstract summary: We propose a novel approach, Fourier stabilization, for designing evasion-robust neural networks with binary inputs.
We experimentally demonstrate the effectiveness of the proposed approach in boosting the robustness of neural networks in several detection settings.
- Score: 18.409463838775558
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the considerable success of neural networks in security settings such
as malware detection, such models have proved vulnerable to evasion attacks, in
which attackers make slight changes to inputs (e.g., malware) to bypass
detection. We propose a novel approach, \emph{Fourier stabilization}, for
designing evasion-robust neural networks with binary inputs. This approach,
which is complementary to other forms of defense, replaces the weights of
individual neurons with robust analogs derived using Fourier analytic tools.
The choice of which neurons to stabilize in a neural network is then a
combinatorial optimization problem, and we propose several methods for
approximately solving it. We provide a formal bound on the per-neuron drop in
accuracy due to Fourier stabilization, and experimentally demonstrate the
effectiveness of the proposed approach in boosting robustness of neural
networks in several detection settings. Moreover, we show that our approach
effectively composes with adversarial training.
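As a rough illustration of the core idea (replacing a neuron's weights with analogs derived from its Fourier spectrum over binary inputs), here is a minimal NumPy sketch. It assumes a single linear-threshold neuron on ±1-valued inputs and simply substitutes sampled estimates of the neuron's degree-0 and degree-1 Fourier coefficients (its Chow parameters) for the original bias and weights; the normalization used below is an arbitrary choice for illustration, not the margin-maximizing rescaling or the neuron-selection procedure derived in the paper.

```python
import numpy as np

def neuron_output(x, w, b):
    """Linear-threshold neuron on {-1, +1}-valued inputs: sign(<w, x> + b)."""
    return np.sign(x @ w + b)

def fourier_stabilize(w, b, n_samples=200_000, seed=None):
    """Toy 'Fourier stabilization' of a single binary-input neuron.

    Replaces (w, b) with (a rescaled version of) the neuron's degree-1 and
    degree-0 Fourier coefficients, estimated by sampling uniform inputs from
    {-1, +1}^n. Illustrative simplification only; the paper derives the exact
    rescaling from a margin-maximization argument.
    """
    rng = np.random.default_rng(seed)
    n = w.shape[0]
    x = rng.choice([-1.0, 1.0], size=(n_samples, n))   # uniform binary inputs
    y = neuron_output(x, w, b)                          # labels of the original neuron
    chow0 = y.mean()                                    # degree-0 coefficient E[f(x)]
    chow1 = (x * y[:, None]).mean(axis=0)               # degree-1 coefficients E[f(x) x_i]
    # Rescale so the stabilized weights match the original Euclidean norm
    # (an assumption made here purely to keep outputs on a comparable scale).
    scale = np.linalg.norm(w) / (np.linalg.norm(chow1) + 1e-12)
    return scale * chow1, scale * chow0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 20
    w = rng.normal(size=n)
    b = 0.1
    w_stab, b_stab = fourier_stabilize(w, b, seed=1)
    # Agreement between the original and the stabilized neuron on fresh inputs.
    x_test = rng.choice([-1.0, 1.0], size=(10_000, n))
    agree = np.mean(neuron_output(x_test, w, b) == neuron_output(x_test, w_stab, b_stab))
    print(f"agreement with original neuron: {agree:.3f}")
```

Running the sketch typically shows the stabilized neuron agreeing with the original on the large majority of random binary inputs, which gives a feel for the per-neuron accuracy drop that the paper bounds formally.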
Related papers
- Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z) - On the Robustness and Anomaly Detection of Sparse Neural Networks [28.832060124537843]
We show that sparsity can make networks more robust and better anomaly detectors.
We also show that structured sparsity greatly helps in reducing the complexity of expensive robustness and detection methods.
We introduce a new method, SensNorm, which uses the sensitivity of weights derived from an appropriate pruning method to detect anomalous samples.
arXiv Detail & Related papers (2022-07-09T09:03:52Z) - Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve the empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z) - Stable, accurate and efficient deep neural networks for inverse problems with analysis-sparse models [2.969705152497174]
We present a novel construction of an accurate, stable and efficient neural network for inverse problems with general analysis-sparse models.
To construct the network, we unroll NESTA, an accelerated first-order method for convex optimization.
A restart scheme is employed to enable exponential decay of the required network depth, yielding a shallower, and consequently more efficient, network.
arXiv Detail & Related papers (2022-03-02T00:44:25Z) - A Layer-wise Adversarial-aware Quantization Optimization for Improving Robustness [4.794745827538956]
We find that adversarially-trained neural networks are more vulnerable to quantization loss than plain models.
We propose a layer-wise adversarial-aware quantization method, using the Lipschitz constant to choose the best quantization parameter settings for a neural network.
Experiment results show that our method can effectively and efficiently improve the robustness of quantized adversarially-trained neural networks.
arXiv Detail & Related papers (2021-10-23T22:11:30Z) - Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters can be suitable, resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
arXiv Detail & Related papers (2021-09-20T15:12:16Z) - Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z) - And/or trade-off in artificial neurons: impact on adversarial robustness [91.3755431537592]
The presence of a sufficient number of OR-like neurons in a network can lead to classification brittleness and increased vulnerability to adversarial attacks.
We define AND-like neurons and propose measures to increase their proportion in the network.
Experimental results on the MNIST dataset suggest that our approach holds promise as a direction for further exploration.
arXiv Detail & Related papers (2021-02-15T08:19:05Z) - Supervised training of spiking neural networks for robust deployment on mixed-signal neuromorphic processors [2.6949002029513167]
Mixed-signal analog/digital electronic circuits can emulate spiking neurons and synapses with extremely high energy efficiency.
Mismatch is expressed as differences in effective parameters between identically-configured neurons and synapses.
We present a supervised learning approach that addresses this challenge by maximizing robustness to mismatch and other common sources of noise.
arXiv Detail & Related papers (2021-02-12T09:20:49Z) - Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the robustness of the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z) - Stable Neural Flows [15.318500611972441]
We introduce a provably stable variant of neural ordinary differential equations (neural ODEs) whose trajectories evolve on an energy functional parametrised by a neural network.
The learning procedure is cast as an optimal control problem, and an approximate solution is proposed based on adjoint sensitivity analysis.
arXiv Detail & Related papers (2020-03-18T06:27:21Z)
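To give a concrete feel for the last entry, the short PyTorch sketch below integrates a gradient flow dx/dt = -∇E(x) whose scalar energy E is parametrised by a small neural network, so the energy is (approximately) non-increasing along trajectories for a sufficiently small step size. This is only an illustrative reading of "trajectories evolve on an energy functional", under assumptions of my own; it is not the construction or the optimal-control training procedure from the Stable Neural Flows paper.

```python
import torch

# Small network parametrising a scalar energy E(x); the architecture is a hypothetical choice.
energy = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def vector_field(x):
    """Gradient-flow field dx/dt = -grad_x E(x), computed with autograd."""
    x = x.detach().requires_grad_(True)      # leaf tensor so autograd.grad applies
    e = energy(x).sum()
    (grad,) = torch.autograd.grad(e, x)
    return -grad

def integrate(x0, steps=200, dt=0.05):
    """Explicit Euler rollout; energy decreases along the flow for small enough dt."""
    x = x0.clone()
    for _ in range(steps):
        x = x + dt * vector_field(x)
    return x

x0 = torch.randn(16, 2)
xT = integrate(x0)
with torch.no_grad():
    print("mean energy: start %.3f -> end %.3f"
          % (energy(x0).mean().item(), energy(xT).mean().item()))
```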