A Tunable Robust Pruning Framework Through Dynamic Network Rewiring of
DNNs
- URL: http://arxiv.org/abs/2011.03083v2
- Date: Tue, 24 Nov 2020 17:40:52 GMT
- Title: A Tunable Robust Pruning Framework Through Dynamic Network Rewiring of
DNNs
- Authors: Souvik Kundu, Mahdi Nazemi, Peter A. Beerel, Massoud Pedram
- Abstract summary: We present a dynamic network rewiring (DNR) method to generate pruned deep neural network (DNN) models that are robust against adversarial attacks.
Our experiments show that DNR consistently finds compressed models with better clean and adversarial image classification performance than what is achievable through state-of-the-art alternatives.
- Score: 8.597091257152567
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a dynamic network rewiring (DNR) method to generate
pruned deep neural network (DNN) models that are robust against adversarial
attacks yet maintain high accuracy on clean images. In particular, the
disclosed DNR method is based on a unified constrained optimization formulation
using a hybrid loss function that merges ultra-high model compression with
robust adversarial training. This training strategy dynamically adjusts
inter-layer connectivity based on per-layer normalized momentum computed from
the hybrid loss function. In contrast to existing robust pruning frameworks
that require multiple training iterations, the proposed learning strategy
achieves an overall target pruning ratio with only a single training iteration
and can be tuned to support both irregular and structured channel pruning. To
evaluate the merits of DNR, experiments were performed with two widely accepted
models, namely VGG16 and ResNet-18, on CIFAR-10, CIFAR-100 as well as with
VGG16 on Tiny-ImageNet. Compared to the baseline uncompressed models, DNR
provides over 20x compression on all the datasets with no significant drop in
either clean or adversarial classification accuracy. Moreover, our experiments
show that DNR consistently finds compressed models with better clean and
adversarial image classification performance than what is achievable through
state-of-the-art alternatives.
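The abstract describes two ingredients of DNR: a hybrid loss that couples the clean-image objective with adversarial training, and a prune-and-rewire step that redistributes each layer's share of the overall sparsity budget according to its normalized momentum. Below is a minimal, hedged sketch of such a training loop in PyTorch. It is not the authors' implementation: the PGD attack budget, the hybrid-loss weighting `beta`, the use of SGD momentum buffers as the per-layer momentum signal, and the magnitude-based selection inside each layer's budget are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=7):
    # Standard L-inf PGD; the attack budget here is an assumed, commonly used setting.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        grad, = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def hybrid_loss(model, x, y, beta=1.0):
    # Hybrid objective: clean cross-entropy plus a weighted adversarial term
    # (the exact form and weighting used in the paper may differ).
    x_adv = pgd_attack(model, x, y)
    return F.cross_entropy(model(x), y) + beta * F.cross_entropy(model(x_adv), y)

@torch.no_grad()
def rewire(params, masks, momenta, target_density=0.05):
    # Reallocate the global connection budget across layers in proportion to each
    # layer's normalized momentum, then keep the largest-magnitude weights inside
    # every layer's budget. For structured channel pruning, the same scoring would
    # be applied to whole output channels rather than individual weights.
    importance = torch.stack([m.abs().mean() for m in momenta])
    importance = importance / importance.sum()
    budget = int(target_density * sum(p.numel() for p in params))
    for p, mask, share in zip(params, masks, importance):
        k = max(1, min(int(share.item() * budget), p.numel()))
        keep = torch.topk(p.abs().flatten(), k).indices
        flat = mask.view(-1)
        flat.zero_()
        flat[keep] = 1.0

# One training step, in outline (params = list of weight tensors,
# masks = [torch.ones_like(p) for p in params], optimizer = SGD with momentum):
#   loss = hybrid_loss(model, x, y)
#   optimizer.zero_grad(); loss.backward(); optimizer.step()
#   for p, m in zip(params, masks):
#       p.mul_(m)                      # enforce the current sparsity pattern
#   every few iterations:
#   rewire(params, masks, [optimizer.state[p]["momentum_buffer"] for p in params])
```

This sketch only redistributes per-layer budgets and keeps the largest-magnitude weights within each budget; the paper's constrained-optimization formulation, its exact pruning and growth criteria, and the structured-pruning variant are richer than what is shown here.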
Related papers
- Robust low-rank training via approximate orthonormal constraints [2.519906683279153]
We introduce a robust low-rank training algorithm that maintains the network's weights on the low-rank matrix manifold.
The resulting model reduces both training and inference costs while ensuring well-conditioning and thus better adversarial robustness, without compromising model accuracy.
arXiv Detail & Related papers (2023-06-02T12:22:35Z)
- Learning Robust Kernel Ensembles with Kernel Average Pooling [3.6540368812166872]
We introduce Kernel Average Pooling (KAP), a neural network building block that applies the mean filter along the kernel dimension of the layer activation tensor.
We show that ensembles of kernels with similar functionality naturally emerge in convolutional neural networks equipped with KAP and trained with backpropagation. (A minimal sketch of the pooling operation itself appears after this list.)
arXiv Detail & Related papers (2022-09-30T19:49:14Z)
- Effective Invertible Arbitrary Image Rescaling [77.46732646918936]
Invertible Neural Networks (INN) are able to increase upscaling accuracy significantly by optimizing the downscaling and upscaling cycle jointly.
In this work, a simple and effective invertible arbitrary rescaling network (IARN) is proposed to achieve arbitrary image rescaling by training only one model.
It is shown to achieve a state-of-the-art (SOTA) performance in bidirectional arbitrary rescaling without compromising perceptual quality in LR outputs.
arXiv Detail & Related papers (2022-09-26T22:22:30Z)
- Distributed Adversarial Training to Robustify Deep Neural Networks at Scale [100.19539096465101]
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification.
To defend against such attacks, an effective approach, known as adversarial training (AT), has been shown to improve model robustness.
We propose a large-batch adversarial training framework implemented over multiple machines.
arXiv Detail & Related papers (2022-06-13T15:39:43Z)
- Compression-aware Training of Neural Networks using Frank-Wolfe [27.69586583737247]
We propose a framework that encourages convergence to well-performing solutions while inducing robustness towards filter pruning and low-rank matrix decomposition.
Our method is able to outperform existing compression-aware approaches and, in the case of low-rank matrix decomposition, it also requires significantly less computational resources than approaches based on nuclear-norm regularization.
arXiv Detail & Related papers (2022-05-24T09:29:02Z)
- Robust Binary Models by Pruning Randomly-initialized Networks [57.03100916030444]
We propose ways to obtain models robust against adversarial attacks from randomly-initialized binary networks.
We learn the structure of the robust model by pruning a randomly-initialized binary network.
Our method confirms the strong lottery ticket hypothesis in the presence of adversarial attacks.
arXiv Detail & Related papers (2022-02-03T00:05:08Z)
- Learning Transformer Features for Image Quality Assessment [53.51379676690971]
We propose a unified IQA framework that utilizes a CNN backbone and a transformer encoder to extract features.
The proposed framework is compatible with both FR and NR modes and allows for a joint training scheme.
arXiv Detail & Related papers (2021-12-01T13:23:00Z)
- Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z)
- Towards Efficient Tensor Decomposition-Based DNN Model Compression with Optimization Framework [14.27609385208807]
We propose a systematic framework for tensor decomposition-based model compression using the Alternating Direction Method of Multipliers (ADMM).
Our framework is very general, and it works for both CNNs and RNNs.
Experimental results show that our ADMM-based TT-format models demonstrate very high compression performance with high accuracy.
arXiv Detail & Related papers (2021-07-26T18:31:33Z)
- Rate Distortion Characteristic Modeling for Neural Image Compression [59.25700168404325]
End-to-end optimization capability offers neural image compression (NIC) superior lossy compression performance.
Distinct models must be trained to reach different points in the rate-distortion (R-D) space.
We make efforts to formulate the essential mathematical functions that describe the R-D behavior of NIC using deep networks and statistical modeling.
arXiv Detail & Related papers (2021-06-24T12:23:05Z)
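As a brief aside on one of the entries above: the Kernel Average Pooling (KAP) summary states that the operator applies a mean filter along the kernel (channel) dimension of the layer activation tensor. A minimal reading of that description is sketched below; the window size, stride, and padding are assumptions, and the published operator may differ in such details.

```python
import torch
import torch.nn.functional as F

def kernel_average_pool(x: torch.Tensor, k: int = 3) -> torch.Tensor:
    # x: activations of shape (N, C, H, W). Slide a mean filter of (odd) width k
    # over the channel dimension with stride 1 and zero padding so that the
    # number of channels is preserved.
    n, c, h, w = x.shape
    z = x.permute(0, 2, 3, 1).reshape(n * h * w, 1, c)   # channels become the pooled axis
    z = F.avg_pool1d(z, kernel_size=k, stride=1, padding=k // 2)
    return z.reshape(n, h, w, c).permute(0, 3, 1, 2)

# Example: average each channel with its immediate neighbours.
# y = kernel_average_pool(torch.randn(2, 64, 8, 8), k=3)   # y.shape == (2, 64, 8, 8)
```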
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.