Low-Rank+Sparse Tensor Compression for Neural Networks
- URL: http://arxiv.org/abs/2111.01697v1
- Date: Tue, 2 Nov 2021 15:55:07 GMT
- Title: Low-Rank+Sparse Tensor Compression for Neural Networks
- Authors: Cole Hawkins, Haichuan Yang, Meng Li, Liangzhen Lai, Vikas Chandra
- Abstract summary: We propose to combine low-rank tensor decomposition with sparse pruning in order to take advantage of both coarse and fine structure for compression.
We compress weights in SOTA architectures (MobileNetv3, EfficientNet, Vision Transformer) and compare this approach to sparse pruning and tensor decomposition alone.
- Score: 11.632913694957868
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Low-rank tensor compression has been proposed as a promising approach to
reduce the memory and compute requirements of neural networks for their
deployment on edge devices. Tensor compression reduces the number of parameters
required to represent a neural network weight by assuming network weights
possess a coarse higher-order structure. This coarse structure assumption has
been applied to compress large neural networks such as VGG and ResNet. However,
modern state-of-the-art neural networks for computer vision tasks (e.g.
MobileNet, EfficientNet) already assume a coarse factorized structure through
depthwise separable convolutions, making pure tensor decomposition a less
attractive approach. We propose to combine low-rank tensor decomposition with
sparse pruning in order to take advantage of both coarse and fine structure for
compression. We compress weights in SOTA architectures (MobileNetv3,
EfficientNet, Vision Transformer) and compare this approach to sparse pruning
and tensor decomposition alone.
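The combination is easiest to see on a single weight matrix. Below is a minimal NumPy sketch, not the paper's implementation: a truncated SVD stands in for the higher-order tensor factorizations used in the paper and captures the coarse structure, while a sparse matrix of the largest residual entries captures the fine structure. The rank and sparsity budget are illustrative choices.

```python
import numpy as np

def lowrank_plus_sparse(W, rank, n_sparse):
    """Approximate W ~= L + S: a rank-`rank` term from a truncated SVD
    plus a sparse term holding the n_sparse largest residual entries."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]          # coarse structure
    residual = W - L
    thresh = np.sort(np.abs(residual), axis=None)[-n_sparse]
    S = np.where(np.abs(residual) >= thresh, residual, 0.0)  # fine structure
    return L, S

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))
rank, n_sparse = 16, 1000
L, S = lowrank_plus_sparse(W, rank, n_sparse)
err = np.linalg.norm(W - (L + S)) / np.linalg.norm(W)
stored = rank * (W.shape[0] + W.shape[1]) + n_sparse
print(f"relative error {err:.3f}, {stored} stored values vs {W.size}")
```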
Related papers
- "Lossless" Compression of Deep Neural Networks: A High-dimensional
Neural Tangent Kernel Approach [49.744093838327615]
We provide a novel compression approach to wide and fully-connected emphdeep neural nets.
Experiments on both synthetic and real-world data are conducted to support the advantages of the proposed compression scheme.
arXiv Detail & Related papers (2024-03-01T03:46:28Z)
- Tensor Decomposition for Model Reduction in Neural Networks: A Review [13.96938227911258]
Modern neural networks have revolutionized the fields of computer vision (CV) and natural language processing (NLP).
They are widely used for solving complex CV and NLP tasks such as image classification, image generation, and machine translation.
This paper reviews six tensor decomposition methods and illustrates their ability to compress model parameters.
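As a concrete illustration of the family of methods such a review covers, here is a minimal sketch (assumed shapes and rank, NumPy only) of a Tucker-1 style factorization: unfold one mode of a convolution kernel, truncate its SVD, and refold. Full CP or Tucker decompositions factorize every mode rather than just one.

```python
import numpy as np

def tucker1_compress(kernel, rank):
    """Factor a conv kernel along its output-channel mode: unfold to
    (c_out, c_in*kh*kw), truncate the SVD, refold. Equivalent to a
    smaller convolution followed by a 1x1 convolution."""
    c_out, c_in, kh, kw = kernel.shape
    U, s, Vt = np.linalg.svd(kernel.reshape(c_out, -1), full_matrices=False)
    A = U[:, :rank] * s[:rank]                      # (c_out, rank) 1x1 conv
    core = Vt[:rank].reshape(rank, c_in, kh, kw)    # compressed kernel
    return A, core

rng = np.random.default_rng(0)
K = rng.standard_normal((128, 64, 3, 3))
A, core = tucker1_compress(K, rank=32)
approx = (A @ core.reshape(32, -1)).reshape(K.shape)
print("relative error:", np.linalg.norm(K - approx) / np.linalg.norm(K))
```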
arXiv Detail & Related papers (2023-04-26T13:12:00Z)
- Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and tweaks unimportant weights on-the-fly by a small amount proportional to the magnitude scale.
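A minimal sketch of the soft-shrinkage idea (not the authors' exact ISS-P procedure; the percentage and shrink factor are illustrative): instead of hard-zeroing pruned weights, the smallest ones are repeatedly scaled down, so they stay trainable and can recover if they become important again.

```python
import numpy as np

def soft_shrink_step(W, prune_pct=0.5, shrink=0.1):
    """One soft-shrinkage iteration: scale the smallest prune_pct of
    weights down by `shrink` instead of zeroing them outright."""
    thresh = np.quantile(np.abs(W), prune_pct)
    W = W.copy()
    unimportant = np.abs(W) <= thresh
    W[unimportant] *= 1.0 - shrink   # gentle, reversible shrinkage
    return W

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))
for _ in range(50):          # interleave with training steps in practice
    W = soft_shrink_step(W)
print("near-zero fraction:", np.mean(np.abs(W) < 1e-2))
```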
arXiv Detail & Related papers (2023-03-16T21:06:13Z)
- A Theoretical Understanding of Neural Network Compression from Sparse Linear Approximation [37.525277809849776]
The goal of model compression is to reduce the size of a large neural network while retaining a comparable performance.
We use the sparsity-sensitive $\ell_q$-norm to characterize compressibility and provide a relationship between the soft sparsity of the weights in the network and the degree of compression.
We also develop adaptive algorithms for pruning each neuron in the network informed by our theory.
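The paper's bounds and adaptive algorithm are not reproduced here, but a small sketch can show the underlying notion of soft sparsity; the normalization below is one plausible choice. For $0 < q < 1$, a small normalized $\ell_q$ quasi-norm means the weight energy is concentrated on a few entries, suggesting a neuron can be pruned more aggressively.

```python
import numpy as np

def lq_sparsity(w, q=0.5):
    """Normalized l_q quasi-norm, q in (0, 1): smaller values mean the
    energy sits on a few weights, i.e. the vector is softly sparse."""
    w = np.abs(w) / np.linalg.norm(w)
    return (w ** q).sum() ** (1.0 / q)

rng = np.random.default_rng(0)
dense = rng.standard_normal(1024)       # energy spread over all entries
spiky = np.zeros(1024)
spiky[:10] = rng.standard_normal(10)    # energy on 10 entries
print(lq_sparsity(dense), lq_sparsity(spiky))  # large vs. small
```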
arXiv Detail & Related papers (2022-06-11T20:10:35Z)
- Fast Conditional Network Compression Using Bayesian HyperNetworks [54.06346724244786]
We introduce a conditional compression problem and propose a fast framework for tackling it.
The problem is how to quickly compress a pretrained large neural network into optimal smaller networks given target contexts.
Our methods can quickly generate compressed networks with significantly smaller sizes than baseline methods.
arXiv Detail & Related papers (2022-05-13T00:28:35Z)
- Compact representations of convolutional neural networks via weight pruning and quantization [63.417651529192014]
We propose a novel storage format for convolutional neural networks (CNNs) based on source coding and leveraging both weight pruning and quantization.
We achieve a reduction of space occupancy up to 0.6% on fully connected layers and 5.44% on the whole network, while performing at least as competitively as the baseline.
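The entropy-coding details of the storage format are not reproduced here; the sketch below shows only the prune-then-quantize front end that such a format would encode, with an assumed uniform quantizer and illustrative sparsity and bit-width settings.

```python
import numpy as np

def prune_and_quantize(W, sparsity=0.9, bits=4):
    """Zero the smallest weights, then uniformly quantize survivors to
    2**bits levels. A storage format would entropy-code the resulting
    (index, code) pairs."""
    mask = np.abs(W) > np.quantile(np.abs(W), sparsity)
    vals = W[mask]
    lo, hi = vals.min(), vals.max()
    levels = 2 ** bits - 1
    codes = np.round((vals - lo) / (hi - lo) * levels).astype(np.uint8)
    W_hat = np.zeros_like(W)
    W_hat[mask] = lo + codes / levels * (hi - lo)   # dequantized weights
    return W_hat, np.nonzero(mask), codes

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))
W_hat, idx, codes = prune_and_quantize(W)
print("kept", codes.size, "of", W.size, "weights; relative error:",
      np.linalg.norm(W - W_hat) / np.linalg.norm(W))
```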
arXiv Detail & Related papers (2021-08-28T20:39:54Z)
- Learning N:M Fine-grained Structured Sparse Neural Networks From Scratch [75.69506249886622]
Sparsity in Deep Neural Networks (DNNs) has been widely studied to compress and accelerate the models on resource-constrained environments.
In this paper, we are the first to study training from scratch an N:M fine-grained structured sparse network.
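N:M sparsity keeps exactly N nonzero weights in every block of M consecutive weights, a pattern that sparse tensor core hardware can accelerate. A minimal NumPy sketch of the magnitude-based 2:4 projection follows (one-shot masking only, not the paper's from-scratch training recipe):

```python
import numpy as np

def nm_sparsify(W, n=2, m=4):
    """Enforce N:M structured sparsity: in every group of m consecutive
    weights along the last axis, keep the n largest in magnitude."""
    groups = W.reshape(-1, m)
    drop = np.argsort(np.abs(groups), axis=1)[:, : m - n]  # smallest m-n
    mask = np.ones_like(groups, dtype=bool)
    np.put_along_axis(mask, drop, False, axis=1)
    return (groups * mask).reshape(W.shape)

rng = np.random.default_rng(0)
W_24 = nm_sparsify(rng.standard_normal((8, 16)))
print((W_24.reshape(-1, 4) != 0).sum(axis=1))  # exactly 2 per block of 4
```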
arXiv Detail & Related papers (2021-02-08T05:55:47Z)
- Permute, Quantize, and Fine-tune: Efficient Compression of Neural Networks [70.0243910593064]
Key to success of vector quantization is deciding which parameter groups should be compressed together.
In this paper we make the observation that the weights of two adjacent layers can be permuted while expressing the same function.
We then establish a connection to rate-distortion theory and search for permutations that result in networks that are easier to compress.
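The permutation-invariance observation is easy to check numerically; below is a minimal sketch with arbitrary layer sizes. The paper's actual contribution, searching for permutations under a rate-distortion criterion before vector quantization, is not shown.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0)

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
W1 = rng.standard_normal((128, 64))   # layer 1: 64 -> 128
W2 = rng.standard_normal((32, 128))   # layer 2: 128 -> 32

# Permuting the hidden units (rows of W1 and, correspondingly, columns
# of W2) leaves the composed function unchanged, freeing the permutation
# as a degree of freedom to make weight groups easier to quantize.
perm = rng.permutation(128)
y = W2 @ relu(W1 @ x)
y_perm = W2[:, perm] @ relu(W1[perm] @ x)
print(np.allclose(y, y_perm))  # True
```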
arXiv Detail & Related papers (2020-10-29T15:47:26Z)
- Convolutional neural networks compression with low rank and sparse tensor decompositions [0.0]
Convolutional neural networks show outstanding results in a variety of computer vision tasks.
For some real-world applications, it is crucial to develop models, which can be fast and light enough to run on edge systems and mobile devices.
In this work, we consider a neural network compression method based on tensor decompositions.
arXiv Detail & Related papers (2020-06-11T13:53:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.