Surrogate Lagrangian Relaxation: A Path To Retrain-free Deep Neural
Network Pruning
- URL: http://arxiv.org/abs/2304.04120v1
- Date: Sat, 8 Apr 2023 22:48:30 GMT
- Title: Surrogate Lagrangian Relaxation: A Path To Retrain-free Deep Neural
Network Pruning
- Authors: Shanglin Zhou, Mikhail A. Bragin, Lynn Pepin, Deniz Gurevin, Fei Miao,
Caiwen Ding
- Abstract summary: Network pruning is a widely used technique to reduce computation cost and model size for deep neural networks.
In this paper, we develop a systematic weight-pruning optimization approach based on Surrogate Lagrangian relaxation.
- Score: 9.33753001494221
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Network pruning is a widely used technique to reduce computation cost and
model size for deep neural networks. However, the typical three-stage pipeline
significantly increases the overall training time. In this paper, we develop a
systematic weight-pruning optimization approach based on Surrogate Lagrangian
relaxation, which is tailored to overcome difficulties caused by the discrete
nature of the weight-pruning problem. We prove that our method ensures fast
convergence of the model compression problem, and the convergence of the SLR is
accelerated by using quadratic penalties. Model parameters obtained by SLR
during the training phase are much closer to their optimal values as compared
to those obtained by other state-of-the-art methods. We evaluate our method on
image classification tasks using CIFAR-10 and ImageNet with state-of-the-art
models, including MLP-Mixer, Swin Transformer, VGG-16, ResNet-18, ResNet-50,
ResNet-110, and MobileNetV2. We also evaluate object detection and segmentation
tasks on the COCO and KITTI benchmarks and the TuSimple lane detection dataset
using a variety of models.
Experimental results demonstrate that our SLR-based weight-pruning optimization
approach achieves a higher compression rate than state-of-the-art methods under
the same accuracy requirement and also can achieve higher accuracy under the
same compression rate requirement. On classification tasks, our SLR approach
converges to the desired accuracy $3\times$ faster on both datasets. On object
detection and segmentation tasks, SLR also converges $2\times$ faster to the
desired accuracy. Further, our SLR achieves high model accuracy
even at the hard-pruning stage without retraining, which reduces the
traditional three-stage pruning into a two-stage process. Given a limited
budget of retraining epochs, our approach quickly recovers the model's
accuracy.
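For readers who want to see the shape of the optimization the abstract describes,
below is a minimal, self-contained sketch of constrained weight pruning solved with
an augmented-Lagrangian-style splitting and a quadratic penalty. It illustrates the
general structure only and is not the authors' SLR algorithm: the toy least-squares
loss, the top-k projection, and every hyperparameter are placeholders, and the
SLR-specific surrogate multiplier updates and step-size rules are what the paper
actually contributes.

```python
import numpy as np

# Toy setup: fit a single weight matrix W to a linear target while constraining
# it to keep at most k non-zero entries (the discrete part of the pruning problem).
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 32))
W_true = rng.normal(size=(32, 16)) * (rng.random(size=(32, 16)) < 0.2)  # sparse target
Y = X @ W_true

def loss_grad(W):
    """Gradient of the mean-squared reconstruction loss (stand-in for the network loss)."""
    return X.T @ (X @ W - Y) / len(X)

def project_topk(W, k):
    """Hard projection onto the sparsity constraint: keep the k largest-magnitude weights."""
    thresh = np.sort(np.abs(W).ravel())[-k]
    return np.where(np.abs(W) >= thresh, W, 0.0)

# Splitting with a quadratic penalty:  minimize loss(W)  s.t.  W = Z,  Z sparse.
W = np.zeros_like(W_true)     # weights being trained
Z = np.zeros_like(W)          # auxiliary sparse copy
U = np.zeros_like(W)          # running multiplier (dual) estimate
rho, lr, k = 1e-2, 1e-1, 100  # penalty weight, step size, sparsity budget (all illustrative)

for _ in range(200):
    # Primal step: a few gradient steps on loss(W) + (rho/2) * ||W - Z + U||^2.
    for _ in range(5):
        W -= lr * (loss_grad(W) + rho * (W - Z + U))
    # Projection step: pull the auxiliary copy onto the sparsity constraint.
    Z = project_topk(W + U, k)
    # Multiplier update driven by the remaining constraint violation W - Z.
    U += W - Z

W_pruned = project_topk(W, k)  # final hard pruning, ideally needing little or no retraining
print("non-zeros:", np.count_nonzero(W_pruned),
      "relative fit error:", np.linalg.norm(X @ W_pruned - Y) / np.linalg.norm(Y))
```

The quadratic penalty in the primal step is the part the abstract credits with
accelerating convergence; in the paper's formulation it is combined with surrogate
Lagrangian multiplier updates rather than the plain dual ascent shown here.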
Related papers
- A-SDM: Accelerating Stable Diffusion through Redundancy Removal and
Performance Optimization [54.113083217869516]
In this work, we first explore the computationally redundant parts of the network.
We then prune the redundant blocks of the model while maintaining network performance.
Thirdly, we propose global-regional interactive (GRI) attention to speed up the computationally intensive attention part.
arXiv Detail & Related papers (2023-12-24T15:37:47Z)
- Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and tweaks unimportant weights on-the-fly by a small amount proportional to the magnitude scale (a toy sketch of this shrinkage step follows this entry).
arXiv Detail & Related papers (2023-03-16T21:06:13Z)
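A minimal sketch of the soft-shrinkage step summarized above, assuming a simple percentile threshold and a fixed shrink factor; the actual ISS-P schedule and the "percentage" part of the method are in the paper, and all names and constants here are illustrative.

```python
import numpy as np

def soft_shrink_step(W, prune_pct=30.0, shrink=0.9):
    """Shrink the smallest-magnitude weights toward zero instead of hard-zeroing them.

    Weights below the `prune_pct` magnitude percentile are multiplied by `shrink`,
    so unimportant weights fade out gradually ("on-the-fly") over many iterations.
    """
    thresh = np.percentile(np.abs(W), prune_pct)
    return np.where(np.abs(W) < thresh, W * shrink, W)

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))
for _ in range(50):                 # in practice, interleaved with normal training steps
    W = soft_shrink_step(W)
print("fraction of near-zero weights:", np.mean(np.abs(W) < 1e-3))
```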
- Online Convolutional Re-parameterization [51.97831675242173]
We present online convolutional re-parameterization (OREPA), a two-stage pipeline aiming to reduce the huge training overhead by squeezing the complex training-time block into a single convolution (a generic re-parameterization sketch follows this entry).
Compared with the state-of-the-art re-param models, OREPA is able to save the training-time memory cost by about 70% and accelerate the training speed by around 2x.
We also conduct experiments on object detection and semantic segmentation and show consistent improvements on the downstream tasks.
arXiv Detail & Related papers (2022-04-02T09:50:19Z)
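The "squeeze the training-time block into a single convolution" idea rests on the linearity of convolution: parallel linear branches can be summed into one kernel. Below is a generic merge of a 3x3 and a 1x1 branch as an illustration; it is not OREPA's specific block design or its online (during-training) merging procedure, and the layer shapes are made up.

```python
import numpy as np

# Two parallel branches of a training-time block, each with kernels of shape
# (out_channels, in_channels, kH, kW).
rng = np.random.default_rng(0)
k3 = rng.normal(size=(8, 4, 3, 3))   # 3x3 branch
k1 = rng.normal(size=(8, 4, 1, 1))   # 1x1 branch
b3 = rng.normal(size=8)
b1 = rng.normal(size=8)

# Because convolution is linear, the branches collapse into one 3x3 kernel:
# pad the 1x1 kernel to 3x3 (centered) and add; biases simply sum.
k1_padded = np.zeros_like(k3)
k1_padded[:, :, 1:2, 1:2] = k1
k_merged = k3 + k1_padded
b_merged = b3 + b1

# Applying only the merged convolution gives the same output as summing the two
# branches, which is what removes the extra memory and compute of the block.
print(k_merged.shape, b_merged.shape)   # (8, 4, 3, 3) (8,)
```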
- A Fast and Efficient Conditional Learning for Tunable Trade-Off between Accuracy and Robustness [11.35810118757863]
Existing models that achieve state-of-the-art (SOTA) performance on both clean and adversarially-perturbed images rely on convolution operations conditioned with feature-wise linear modulation (FiLM) layers.
We present a fast learnable once-for-all adversarial training (FLOAT) algorithm which, instead of the existing FiLM-based conditioning, uses a unique weight-conditioned learning scheme that requires no additional layers.
In particular, we add scaled noise to the weight tensors, which enables a trade-off between clean and adversarial performance (a toy sketch of this noise conditioning follows this entry).
arXiv Detail & Related papers (2022-03-28T19:25:36Z)
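A toy rendering of the scaled-noise weight conditioning summarized above, assuming a single conditioning scalar `alpha`; FLOAT's actual noise formulation, training objective, and once-for-all schedule are in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def conditioned_weights(W, alpha, noise_scale=0.1):
    """Blend a weight tensor with scaled noise.

    alpha = 0.0 recovers the clean weights; a larger alpha injects more noise,
    trading clean accuracy for robustness without adding any extra layers.
    """
    noise = rng.normal(size=W.shape) * noise_scale * np.abs(W).mean()
    return W + alpha * noise

W = rng.normal(size=(16, 16))
W_clean = conditioned_weights(W, alpha=0.0)   # clean-accuracy mode
W_robust = conditioned_weights(W, alpha=1.0)  # robustness-leaning mode
print(np.allclose(W, W_clean), np.linalg.norm(W_robust - W))
```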
- FasterPose: A Faster Simple Baseline for Human Pose Estimation [65.8413964785972]
We propose FasterPose, a design paradigm for a cost-effective network with LR representation for efficient pose estimation.
We study the training behavior of FasterPose, and formulate a novel regressive cross-entropy (RCE) loss function for accelerating the convergence.
Compared with the previously dominant network for pose estimation, our method reduces the FLOPs by 58% while gaining a 1.3% improvement in accuracy.
arXiv Detail & Related papers (2021-07-07T13:39:08Z)
- Enabling Retrain-free Deep Neural Network Pruning using Surrogate Lagrangian Relaxation [2.691929135895278]
We develop a systematic weight-pruning optimization approach based on Surrogate Lagrangian Relaxation (SLR).
SLR achieves a higher compression rate than state-of-the-art methods under the same accuracy requirement.
Given a limited budget of retraining epochs, our approach quickly recovers the model accuracy.
arXiv Detail & Related papers (2020-12-18T07:17:30Z)
- Fully Quantized Image Super-Resolution Networks [81.75002888152159]
We propose a Fully Quantized image Super-Resolution framework (FQSR) to jointly optimize efficiency and accuracy.
We apply our quantization scheme on multiple mainstream super-resolution architectures, including SRResNet, SRGAN and EDSR.
Our FQSR with low-bit quantization achieves on-par performance with its full-precision counterparts on five benchmark datasets (a toy quantize-dequantize sketch follows this entry).
arXiv Detail & Related papers (2020-11-29T03:53:49Z)
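The low-bit quantization the summary refers to can be pictured as a uniform quantize-dequantize of each tensor. The sketch below shows only that basic operation, not FQSR's full framework; the bit-width and scaling rule are illustrative.

```python
import numpy as np

def fake_quantize(x, num_bits=4):
    """Uniform symmetric quantize-dequantize, the basic building block of low-bit inference.

    Values are mapped to integers in [-(2**(b-1)-1), 2**(b-1)-1] and back, so the
    returned tensor carries the quantization error that training must absorb.
    """
    qmax = 2 ** (num_bits - 1) - 1
    max_abs = np.abs(x).max()
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(3, 3, 16, 16))          # e.g. a conv kernel from an SR network
w_q = fake_quantize(w, num_bits=4)
print("max abs quantization error:", np.abs(w - w_q).max())
```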
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)