On the Robustness and Anomaly Detection of Sparse Neural Networks
- URL: http://arxiv.org/abs/2207.04227v1
- Date: Sat, 9 Jul 2022 09:03:52 GMT
- Title: On the Robustness and Anomaly Detection of Sparse Neural Networks
- Authors: Morgane Ayle, Bertrand Charpentier, John Rachwan, Daniel Zügner,
Simon Geisler, Stephan Günnemann
- Abstract summary: We show that sparsity can make networks more robust and better anomaly detectors.
We also show that structured sparsity greatly helps in reducing the complexity of expensive robustness and detection methods.
We introduce a new method, SensNorm, which uses the sensitivity of weights derived from an appropriate pruning method to detect anomalous samples.
- Score: 28.832060124537843
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The robustness and anomaly detection capability of neural networks are
crucial topics for their safe adoption in the real world. Moreover, the
over-parameterization of recent networks comes with high computational costs
and raises questions about its influence on robustness and anomaly detection.
In this work, we show that sparsity can make networks more robust and better
anomaly detectors. To motivate this even further, we show that a pre-trained
neural network contains, within its parameter space, sparse subnetworks that
are better at these tasks without any further training. We also show that
structured sparsity greatly helps in reducing the complexity of expensive
robustness and detection methods, while maintaining or even improving their
results on these tasks. Finally, we introduce a new method, SensNorm, which
uses the sensitivity of weights derived from an appropriate pruning method to
detect anomalous samples in the input.
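The abstract does not spell out how SensNorm turns weight sensitivities into a score. Below is a minimal sketch of the general idea, assuming the sensitivity of a weight is the first-order saliency |w * dL/dw| (a standard pruning criterion) and the anomaly score is the norm of the resulting sensitivity vector; the function name and the pseudo-label trick are our assumptions, not the paper's.

    # Hypothetical sketch of a SensNorm-style anomaly score (assumptions:
    # sensitivity = |w * dL/dw|, score = norm of all sensitivities).
    import torch
    import torch.nn.functional as F

    def sensnorm_score(model, x):
        """Higher score = more anomalous, assuming anomalies induce
        unusually large weight sensitivities."""
        model.zero_grad()
        logits = model(x)
        # No label is available at test time; use the prediction as pseudo-label.
        loss = F.cross_entropy(logits, logits.argmax(dim=1))
        loss.backward()
        sens = torch.cat([(p * p.grad).abs().flatten()
                          for p in model.parameters() if p.grad is not None])
        return sens.norm().item()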
Related papers
- Automated Design of Linear Bounding Functions for Sigmoidal Nonlinearities in Neural Networks [23.01933325606068]
Existing complete verification techniques offer provable guarantees for all robustness queries but struggle to scale beyond small neural networks.
We propose a novel parameter search method to improve the quality of these linear approximations.
Specifically, we show that using a simple search method, carefully adapted to the given verification problem through state-of-the-art algorithm configuration techniques, improves the global lower bound by 25% on average over the current state of the art.
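For context, the linear approximations being tuned bound the sigmoid over a known pre-activation interval [l, u]. Below is a minimal sketch of one standard construction (chord between the endpoints plus a tangent line); the midpoint tangent is an arbitrary choice here, and exactly the kind of free parameter such a search method optimizes.

    # Minimal sketch of sound linear bounds a*x + b for sigmoid on [l, u]
    # (assumes l < u; the tangent point is a tunable parameter).
    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def linear_bounds(l, u):
        """Return ((a_lo, b_lo), (a_up, b_up)) bounding sigmoid on [l, u]."""
        a_c = (sigmoid(u) - sigmoid(l)) / (u - l)      # chord slope
        b_c = sigmoid(l) - a_c * l
        m = 0.5 * (l + u)                              # tangent point (tunable)
        a_t = sigmoid(m) * (1.0 - sigmoid(m))          # sigmoid'(m)
        b_t = sigmoid(m) - a_t * m
        if l >= 0:    # sigmoid is concave here: chord below, tangent above
            return (a_c, b_c), (a_t, b_t)
        if u <= 0:    # sigmoid is convex here: tangent below, chord above
            return (a_t, b_t), (a_c, b_c)
        # Interval crosses the inflection point: constant bounds stay sound.
        return (0.0, sigmoid(l)), (0.0, sigmoid(u))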
arXiv Detail & Related papers (2024-06-14T16:16:26Z) - Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve the empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted by up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
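For reference, the simplest operation studied in this line of work is global magnitude pruning; a minimal sketch follows (the paper's exact pruning procedure is not given in this summary).

    # Minimal sketch of global magnitude pruning: zero out the fraction
    # `sparsity` of weights with the smallest absolute value, network-wide.
    import torch

    def magnitude_prune(model, sparsity=0.5):
        weights = torch.cat([p.detach().abs().flatten()
                             for p in model.parameters() if p.dim() > 1])
        k = max(1, int(sparsity * weights.numel()))
        threshold = weights.kthvalue(k).values
        with torch.no_grad():
            for p in model.parameters():
                if p.dim() > 1:                 # prune weights, keep biases
                    p.mul_((p.abs() > threshold).float())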
arXiv Detail & Related papers (2022-06-15T05:48:51Z) - Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for
sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
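Algorithm unfolding turns each iteration of an optimization solver into one network layer. The sketch below shows the plain LISTA-style unrolling of soft-thresholding on which such architectures build; REST's robust modifications are not reproduced here.

    # Minimal LISTA-style unrolling: layer t computes one iteration of
    # soft-thresholding with learnable matrices and thresholds.
    import torch
    import torch.nn as nn

    class UnrolledISTA(nn.Module):
        def __init__(self, m, n, num_layers=10):   # y in R^m, sparse x in R^n
            super().__init__()
            self.W = nn.ModuleList(nn.Linear(m, n, bias=False) for _ in range(num_layers))
            self.S = nn.ModuleList(nn.Linear(n, n, bias=False) for _ in range(num_layers))
            self.theta = nn.Parameter(torch.full((num_layers,), 0.1))

        def forward(self, y):
            x = y.new_zeros(y.shape[0], self.S[0].in_features)
            for t, (W, S) in enumerate(zip(self.W, self.S)):
                z = W(y) + S(x)
                x = torch.sign(z) * torch.clamp(z.abs() - self.theta[t], min=0.0)
            return x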
arXiv Detail & Related papers (2021-10-20T06:15:45Z) - Residual Error: a New Performance Measure for Adversarial Robustness [85.0371352689919]
A major challenge that limits the widespread adoption of deep learning has been its fragility to adversarial attacks.
This study presents the concept of residual error, a new performance measure for assessing the adversarial robustness of a deep neural network.
Experimental results on image classification demonstrate the effectiveness of the proposed residual error metric.
arXiv Detail & Related papers (2021-06-18T16:34:23Z) - Enhancing Robustness of Neural Networks through Fourier Stabilization [18.409463838775558]
We propose a novel approach, Fourier stabilization, for designing evasion-robust neural networks with binary inputs.
We experimentally demonstrate the effectiveness of the proposed approach in boosting the robustness of neural networks in several detection settings.
arXiv Detail & Related papers (2021-06-08T15:12:31Z) - Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
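In symbols (our notation, not necessarily the paper's), the worst case is taken jointly over an input perturbation and a weight perturbation:

    % Hedged formalization: robustness under simultaneous perturbations of
    % the input x (by delta_x) and the weights w (by delta_w).
    \[
      \max_{\|\delta_x\| \le \epsilon_x,\; \|\delta_w\| \le \epsilon_w}
        \mathcal{L}\bigl(f_{w+\delta_w}(x+\delta_x),\, y\bigr)
    \]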
arXiv Detail & Related papers (2021-02-23T20:59:30Z) - Towards Robust Neural Networks via Close-loop Control [12.71446168207573]
Deep neural networks are vulnerable to various perturbations due to their black-box nature.
Recent studies have shown that a deep neural network can misclassify data even when the input is perturbed by an imperceptible amount.
arXiv Detail & Related papers (2021-02-03T03:50:35Z) - Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to harden the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
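A minimal sketch of one coverage-style monitor (our simplification, not necessarily the paper's architecture): record per-neuron activation ranges on trusted data, then flag inputs whose activations leave those ranges.

    # Hypothetical range-coverage monitor: activations outside the ranges
    # observed during calibration mark an input as suspicious.
    import torch

    class RangeMonitor:
        def __init__(self, layer):
            self.lo, self.hi, self._last = None, None, None
            layer.register_forward_hook(self._hook)

        def _hook(self, module, inputs, output):
            self._last = output.detach().flatten(1)

        @torch.no_grad()
        def calibrate(self, model, loader):
            for x, _ in loader:
                model(x)
                lo, hi = self._last.min(0).values, self._last.max(0).values
                self.lo = lo if self.lo is None else torch.minimum(self.lo, lo)
                self.hi = hi if self.hi is None else torch.maximum(self.hi, hi)

        @torch.no_grad()
        def is_suspicious(self, model, x):
            model(x)                               # hook captures activations
            return ((self._last < self.lo) | (self._last > self.hi)).any(dim=1)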
arXiv Detail & Related papers (2021-01-28T16:38:26Z) - Noise Sensitivity-Based Energy Efficient and Robust Adversary Detection
in Neural Networks [3.125321230840342]
Adversarial examples are inputs that have been carefully perturbed to fool classifier networks, while appearing unchanged to humans.
We propose a structured methodology of augmenting a deep neural network (DNN) with a detector subnetwork.
We show that our method improves state-of-the-art detector robustness against adversarial examples.
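As a generic illustration of such an augmentation (the paper's noise-sensitivity-based design is not reproduced here), a detector subnetwork can be a small binary head reading intermediate features of the classifier.

    # Hypothetical detector head: reads intermediate features and predicts
    # clean vs. adversarial; trained on clean and attacked samples.
    import torch.nn as nn

    class DetectorHead(nn.Module):
        def __init__(self, feat_dim, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Flatten(),
                nn.Linear(feat_dim, hidden),
                nn.ReLU(),
                nn.Linear(hidden, 2),   # logits: [clean, adversarial]
            )

        def forward(self, features):
            return self.net(features)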
arXiv Detail & Related papers (2021-01-05T14:31:53Z) - ResGCN: Attention-based Deep Residual Modeling for Anomaly Detection on
Attributed Networks [10.745544780660165]
Residual Graph Convolutional Network (ResGCN) is an attention-based deep residual modeling approach.
We show that ResGCN can effectively detect anomalous nodes in attributed networks.
arXiv Detail & Related papers (2020-09-30T15:24:51Z) - Binary Neural Networks: A Survey [126.67799882857656]
The binary neural network serves as a promising technique for deploying deep models on resource-limited devices.
The binarization inevitably causes severe information loss and, even worse, its discontinuity makes the optimization of the deep network difficult.
We present a survey of these algorithms, mainly categorized into the native solutions directly conducting binarization, and the optimized ones using techniques like minimizing the quantization error, improving the network loss function, and reducing the gradient error.
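As a concrete example of a "native" solution, the sketch below binarizes weights with sign() in the forward pass and uses a straight-through estimator in the backward pass, the standard baseline such surveys cover.

    # Straight-through estimator (STE) for weight binarization: forward uses
    # sign(w); backward passes gradients through where |w| <= 1.
    import torch

    class BinarizeSTE(torch.autograd.Function):
        @staticmethod
        def forward(ctx, w):
            ctx.save_for_backward(w)
            return torch.sign(w)

        @staticmethod
        def backward(ctx, grad_out):
            (w,) = ctx.saved_tensors
            return grad_out * (w.abs() <= 1).float()   # clip gradient outside [-1, 1]

    # Usage: w_bin = BinarizeSTE.apply(w)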
arXiv Detail & Related papers (2020-03-31T16:47:20Z)