Hardening DNNs against Transfer Attacks during Network Compression using Greedy Adversarial Pruning
- URL: http://arxiv.org/abs/2206.07406v1
- Date: Wed, 15 Jun 2022 09:13:35 GMT
- Title: Hardening DNNs against Transfer Attacks during Network Compression using Greedy Adversarial Pruning
- Authors: Jonah O'Brien Weiss, Tiago Alves, Sandip Kundu
- Abstract summary: We investigate the adversarial robustness of models produced by several irregular pruning schemes and by 8-bit quantization.
We find that this pruning method results in models that are resistant to transfer attacks from their uncompressed counterparts.
- Score: 0.1529342790344802
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The prevalence and success of Deep Neural Network (DNN) applications in
recent years have motivated research on DNN compression, such as pruning and
quantization. These techniques accelerate model inference, reduce power
consumption, and reduce the size and complexity of the hardware necessary to
run DNNs, all with little to no loss in accuracy. However, since DNNs are
vulnerable to adversarial inputs, it is important to consider the relationship
between compression and adversarial robustness. In this work, we investigate
the adversarial robustness of models produced by several irregular pruning
schemes and by 8-bit quantization. Additionally, while conventional pruning
removes the least important parameters in a DNN, we investigate the effect of
an unconventional pruning method: removing the most important model parameters
based on the gradient on adversarial inputs. We call this method Greedy
Adversarial Pruning (GAP) and we find that this pruning method results in
models that are resistant to transfer attacks from their uncompressed
counterparts.
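A minimal sketch of the GAP idea in PyTorch, assuming a one-shot, irregular (per-weight) prune that scores each parameter by the magnitude of the loss gradient on an adversarial batch; the paper's exact scoring rule, attack, and pruning schedule are not given in this summary, and `greedy_adversarial_prune` is an illustrative name, not the authors' code.

```python
import torch
import torch.nn.utils.prune as prune

def greedy_adversarial_prune(model, adv_inputs, adv_labels, loss_fn, sparsity=0.1):
    """Remove the MOST important weights, judged by |dLoss/dw| on
    adversarial inputs: the inverse of conventional magnitude pruning."""
    model.zero_grad()
    loss_fn(model(adv_inputs), adv_labels).backward()
    for module in model.modules():
        if isinstance(module, (torch.nn.Linear, torch.nn.Conv2d)):
            score = module.weight.grad.abs()
            k = max(1, int(sparsity * score.numel()))
            cutoff = score.flatten().topk(k).values.min()
            # Keep only weights whose adversarial gradient falls below the cutoff.
            mask = (score < cutoff).to(module.weight.dtype)
            prune.custom_from_mask(module, name="weight", mask=mask)
    return model
```

In the paper's setting, the adversarial batch would be crafted against the uncompressed model, and the pruned model is then evaluated against transfer attacks from that same uncompressed counterpart.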
Related papers
- A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks [52.09243852066406]
The Adversarial Converging Time Score (ACTS) uses the time an adversarial attack takes to converge as a robustness metric.
We validate the effectiveness and generalization of the proposed ACTS metric against different adversarial attacks on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2023-10-10T09:39:38Z)
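The entry above does not define ACTS precisely; as a loose, hypothetical illustration only, the sketch below reads "converging time" as the number of PGD steps an attack needs before the prediction first flips, so that larger values indicate a locally more robust input. ACTS itself may be formulated quite differently.

```python
import torch
import torch.nn.functional as F

def converging_time(model, x, y, eps=8/255, alpha=2/255, max_steps=50):
    # x, y: a single example (batch dimension of 1) and its true label.
    x0 = x.detach()
    x_adv = x0.clone().requires_grad_(True)
    for step in range(1, max_steps + 1):
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            # Standard L-infinity PGD step, projected back into the eps-ball.
            x_adv = (x0 + (x_adv + alpha * grad.sign() - x0).clamp(-eps, eps)).clamp(0, 1)
            if model(x_adv).argmax(1).item() != y.item():
                return step  # the attack has converged: prediction flipped
        x_adv.requires_grad_(True)
    return max_steps  # no flip within the step budget
```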
- Benchmarking Adversarial Robustness of Compressed Deep Learning Models [15.737988622271219]
This study seeks to understand the effect of adversarial inputs crafted for base models on their pruned versions.
Our findings reveal that the benefits of pruning (enhanced generalizability, compression, and faster inference) are preserved, while adversarial robustness remains comparable to that of the base model.
arXiv Detail & Related papers (2023-08-16T06:06:56Z)
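To illustrate the kind of evaluation the study above performs (and the transfer setting in the GAP abstract), here is a minimal sketch that crafts FGSM examples against a base model and measures how often its pruned counterpart still classifies them correctly; the benchmark's actual attacks and metrics may differ.

```python
import torch
import torch.nn.functional as F

def transfer_attack_accuracy(base_model, pruned_model, loader, eps=8/255):
    correct = total = 0
    for x, y in loader:
        # Craft FGSM adversarial examples against the base model...
        x = x.clone().requires_grad_(True)
        F.cross_entropy(base_model(x), y).backward()
        x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()
        # ...then test whether the pruned model still classifies them correctly.
        with torch.no_grad():
            correct += (pruned_model(x_adv).argmax(1) == y).sum().item()
        total += y.numel()
    return correct / total  # higher means more resistant to transfer attacks
```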
- Robust and Lossless Fingerprinting of Deep Neural Networks via Pooled Membership Inference [17.881686153284267]
Deep neural networks (DNNs) have achieved great success in many application areas and brought profound changes to our society.
How to protect the intellectual property (IP) of DNNs against infringement is one of the most important yet very challenging topics.
This paper proposes a novel technique called pooled membership inference (PMI) to protect the IP of DNN models.
arXiv Detail & Related papers (2022-09-09T04:06:29Z)
- DiverGet: A Search-Based Software Testing Approach for Deep Neural Network Quantization Assessment [10.18462284491991]
Quantization is one of the most widely applied Deep Neural Network (DNN) compression strategies.
We present DiverGet, a search-based testing framework for quantization assessment.
We evaluate the performance of DiverGet on state-of-the-art DNNs applied to hyperspectral remote sensing images.
arXiv Detail & Related papers (2022-07-13T15:27:51Z)
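DiverGet's metaheuristic search is beyond a short sketch, but the core signal any quantization-assessment tool looks for can be shown: inputs on which the quantized model's prediction diverges from the full-precision original. The dynamic int8 quantization below is an illustrative stand-in, not DiverGet itself.

```python
import torch

def quantization_disagreement_rate(model, loader):
    # Post-training dynamic quantization of the Linear layers to int8.
    qmodel = torch.quantization.quantize_dynamic(
        model, {torch.nn.Linear}, dtype=torch.qint8)
    disagree = total = 0
    with torch.no_grad():
        for x, _ in loader:
            # Count inputs where the two models predict different classes.
            disagree += (model(x).argmax(1) != qmodel(x).argmax(1)).sum().item()
            total += x.shape[0]
    return disagree / total
```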
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs), a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
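For context, the interval bound propagation baseline that the paper compares against admits a very short sketch for a single linear layer; exact box bounds follow from the center/radius form, and monotone activations such as ReLU then map the interval elementwise. The paper's reachability analysis for implicit layers is more involved.

```python
import torch

def ibp_linear(layer: torch.nn.Linear, lower: torch.Tensor, upper: torch.Tensor):
    # Propagate the box [lower, upper] through y = Wx + b exactly:
    # output center = W @ center + b, output radius = |W| @ radius.
    center = (lower + upper) / 2
    radius = (upper - lower) / 2
    y_center = layer(center)
    y_radius = radius @ layer.weight.abs().t()
    return y_center - y_radius, y_center + y_radius

# For ReLU, bounds map elementwise:
# lower, upper = torch.relu(lower), torch.relu(upper)
```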
- A Biased Graph Neural Network Sampler with Near-Optimal Regret [57.70126763759996]
Graph neural networks (GNNs) have emerged as a vehicle for applying deep network architectures to graph and relational data.
In this paper, we build upon existing work and treat GNN neighbor sampling as a multi-armed bandit problem.
We introduce a newly designed reward function that adds some degree of bias to reduce variance and avoid unstable, possibly unbounded payouts.
arXiv Detail & Related papers (2021-03-01T15:55:58Z)
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
- An Integrated Approach to Produce Robust Models with High Efficiency [9.476463361600828]
Quantization and structure simplification are promising ways to adapt Deep Neural Networks (DNNs) to mobile devices.
In this work, we try to obtain both features by applying a convergent relaxation quantization algorithm, Binary-Relax (BR), to an adversarially trained robust model, ResNets Ensemble.
We design a trade-off loss function that helps DNNs preserve their natural accuracy while improving channel sparsity.
arXiv Detail & Related papers (2020-08-31T00:44:59Z)
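The summary does not spell out the trade-off loss; a common form for balancing accuracy against channel sparsity is cross-entropy plus a group-lasso penalty over convolutional output channels, sketched below. The paper's actual formulation, and its Binary-Relax quantization, may differ.

```python
import torch
import torch.nn.functional as F

def tradeoff_loss(model, x, y, lam=1e-4):
    ce = F.cross_entropy(model(x), y)  # preserves natural accuracy
    # Group lasso: the L2 norm of each conv filter, summed, pushes whole
    # output channels toward zero, which enables structured pruning later.
    group_lasso = sum(
        m.weight.flatten(1).norm(dim=1).sum()
        for m in model.modules() if isinstance(m, torch.nn.Conv2d))
    return ce + lam * group_lasso
```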
- GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups.
arXiv Detail & Related papers (2020-04-20T10:09:27Z)
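A minimal sketch of the gradient-norm signal GraN builds on: the norm of the loss gradient taken at the model's own prediction, which tends to be larger for adversarial and misclassified inputs than for clean, correctly classified ones. GraN additionally learns a lightweight detector over layer-wise norms, which is omitted here.

```python
import torch
import torch.nn.functional as F

def gradient_norm_score(model, x):
    # Use the model's own prediction as the label, so no ground truth
    # is needed at detection time.
    model.zero_grad()
    logits = model(x)
    F.cross_entropy(logits, logits.argmax(1)).backward()
    squared = sum((p.grad ** 2).sum() for p in model.parameters()
                  if p.grad is not None)
    return squared.sqrt()  # larger scores flag suspicious inputs
```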
- Adversarial Attack on Deep Product Quantization Network for Image Retrieval [74.85736968193879]
Deep product quantization network (DPQN) has recently received much attention in fast image retrieval tasks.
Recent studies show that deep neural networks (DNNs) are vulnerable to input with small and maliciously designed perturbations.
We propose product quantization adversarial generation (PQ-AG) to generate adversarial examples for product quantization based retrieval systems.
arXiv Detail & Related papers (2020-02-26T09:25:58Z)