Fast Conditional Network Compression Using Bayesian HyperNetworks
- URL: http://arxiv.org/abs/2205.06404v1
- Date: Fri, 13 May 2022 00:28:35 GMT
- Title: Fast Conditional Network Compression Using Bayesian HyperNetworks
- Authors: Phuoc Nguyen, Truyen Tran, Ky Le, Sunil Gupta, Santu Rana, Dang
Nguyen, Trong Nguyen, Shannon Ryan, and Svetha Venkatesh
- Abstract summary: We introduce a conditional compression problem and propose a fast framework for tackling it.
The problem is how to quickly compress a pretrained large neural network into optimal smaller networks given target contexts.
Our methods can quickly generate compressed networks with significantly smaller sizes than baseline methods.
- Score: 54.06346724244786
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a conditional compression problem and propose a fast framework
for tackling it. The problem is how to quickly compress a pretrained large
neural network into optimal smaller networks given target contexts, e.g. a
context involving only a subset of classes or a context where only limited
compute resources are available. To solve this, we propose an efficient Bayesian
framework to compress a given large network into much smaller size tailored to
meet each contextual requirement. We employ a hypernetwork to parameterize the
posterior distribution of weights given conditional inputs and minimize a
variational objective of this Bayesian neural network. To further reduce the
network sizes, we propose a new input-output group sparsity factorization of
weights to encourage more sparseness in the generated weights. Our methods can
quickly generate compressed networks with significantly smaller sizes than
baseline methods.
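As a minimal sketch of the setup described above, the following PyTorch snippet has a hypernetwork emit a Gaussian posterior over the weights of a single target layer, conditioned on a context vector, and trains it with an ELBO-style variational objective. All names, layer sizes, the standard-normal prior, and the KL weight are illustrative assumptions, and the paper's input-output group sparsity factorization is omitted; this is not the authors' implementation.

```python
# Sketch: a hypernetwork parameterizes a Gaussian posterior over the weights
# of one small target layer, conditioned on a context vector (assumed shapes).
import torch
import torch.nn as nn
import torch.nn.functional as F

CTX_DIM, IN_DIM, OUT_DIM = 8, 16, 4
N_W = IN_DIM * OUT_DIM  # number of target-layer weights

class HyperNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(CTX_DIM, 64), nn.ReLU())
        self.mu = nn.Linear(64, N_W)       # posterior mean of the weights
        self.log_var = nn.Linear(64, N_W)  # posterior log-variance

    def forward(self, ctx):
        h = self.body(ctx)
        return self.mu(h), self.log_var(h)

def sample_weights(mu, log_var):
    # Reparameterization trick: w = mu + sigma * eps
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps

def elbo_loss(x, y, ctx, hyper):
    mu, log_var = hyper(ctx)
    w = sample_weights(mu, log_var).view(OUT_DIM, IN_DIM)
    logits = F.linear(x, w)               # forward pass of the generated layer
    nll = F.cross_entropy(logits, y)      # expected negative log-likelihood
    # KL( N(mu, sigma^2) || N(0, 1) ), assuming a standard normal prior
    kl = 0.5 * (mu.pow(2) + log_var.exp() - log_var - 1).sum()
    return nll + 1e-3 * kl                # 1e-3 is an illustrative KL weight

hyper = HyperNet()
x, y = torch.randn(32, IN_DIM), torch.randint(0, OUT_DIM, (32,))
ctx = torch.randn(1, CTX_DIM)             # e.g. an embedding of the target context
loss = elbo_loss(x, y, ctx, hyper)
loss.backward()
```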
Related papers
- Traversing Between Modes in Function Space for Fast Ensembling [15.145136272169946]
"Bridge" is a lightweight network that takes minimal features from the original network and predicts outputs for the low-loss subspace without forward passes through the original network.
We empirically demonstrate that we can indeed train such bridge networks and significantly reduce inference costs with the help of bridge networks.
arXiv Detail & Related papers (2023-06-20T05:52:26Z)
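A hypothetical sketch of the bridge idea: a small head maps cached intermediate features, plus a coordinate in the low-loss subspace, directly to outputs, so no forward pass through the original network is needed at prediction time. The shapes and the use of a subspace coordinate as an input are assumptions, not the paper's architecture.

```python
# Sketch: a lightweight "bridge" head predicts logits from cheap features.
import torch
import torch.nn as nn

FEAT_DIM, SUBSPACE_DIM, N_CLASSES = 32, 2, 10

bridge = nn.Sequential(
    nn.Linear(FEAT_DIM + SUBSPACE_DIM, 64),
    nn.ReLU(),
    nn.Linear(64, N_CLASSES),
)

feats = torch.randn(16, FEAT_DIM)     # features cached from the base network
alpha = torch.rand(16, SUBSPACE_DIM)  # coordinate of a point in the low-loss subspace
logits = bridge(torch.cat([feats, alpha], dim=1))  # no base-network forward pass
```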
- A Theoretical Understanding of Neural Network Compression from Sparse Linear Approximation [37.525277809849776]
The goal of model compression is to reduce the size of a large neural network while retaining a comparable performance.
We use the sparsity-sensitive $\ell_q$-norm to characterize compressibility and provide a relationship between the soft sparsity of the weights in the network and the degree of compression.
We also develop adaptive algorithms for pruning each neuron in the network informed by our theory.
arXiv Detail & Related papers (2022-06-11T20:10:35Z)
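To make the compressibility measure concrete, here is a small NumPy sketch of an $\ell_q$ quasi-norm (0 < q < 1) as a soft-sparsity score: after $\ell_2$ normalization it is small for weight vectors whose mass concentrates in a few large entries. The paper's exact quantity and pruning rule differ; this only illustrates the behaviour.

```python
# Sketch: l_q quasi-norm as a soft-sparsity measure (smaller = more compressible).
import numpy as np

def lq_norm(w, q=0.5):
    return np.sum(np.abs(w) ** q) ** (1.0 / q)

rng = np.random.default_rng(0)
dense = rng.normal(size=1000)             # mass spread over all entries
sparse = np.zeros(1000)
sparse[:10] = rng.normal(size=10) * 10.0  # mass concentrated in 10 entries

for name, w in [("dense", dense), ("sparse", sparse)]:
    # normalize by l2 so the score reflects shape, not scale
    print(name, lq_norm(w / np.linalg.norm(w), q=0.5))
```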
- Low-Rank+Sparse Tensor Compression for Neural Networks [11.632913694957868]
We propose to combine low-rank tensor decomposition with sparse pruning in order to take advantage of both coarse and fine structure for compression.
We compress weights in SOTA architectures (MobileNetv3, EfficientNet, Vision Transformer) and compare this approach to sparse pruning and tensor decomposition alone.
arXiv Detail & Related papers (2021-11-02T15:55:07Z)
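A simplified matrix version of the low-rank + sparse idea: keep a rank-r SVD term for the coarse structure and a magnitude-pruned residual for the fine structure. The paper applies tensor decompositions to convolutional kernels; the `rank` and `keep` values below are arbitrary illustrations.

```python
# Sketch: decompose a weight matrix into low-rank + sparse parts.
import numpy as np

def low_rank_plus_sparse(W, rank=8, keep=0.05):
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]    # low-rank (coarse) part
    R = W - L                                   # residual
    thresh = np.quantile(np.abs(R), 1.0 - keep) # keep the top `keep` fraction
    S = np.where(np.abs(R) >= thresh, R, 0.0)   # sparse (fine) part
    return L, S

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256))
L, S = low_rank_plus_sparse(W, rank=16, keep=0.02)
err = np.linalg.norm(W - (L + S)) / np.linalg.norm(W)
print(f"relative reconstruction error: {err:.3f}, nonzeros in S: {np.count_nonzero(S)}")
```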
- Compact representations of convolutional neural networks via weight pruning and quantization [63.417651529192014]
We propose a novel storage format for convolutional neural networks (CNNs) based on source coding and leveraging both weight pruning and quantization.
We achieve a reduction in space occupancy of up to 0.6% on fully connected layers and 5.44% on the whole network, while performing at least as competitively as the baseline.
arXiv Detail & Related papers (2021-08-28T20:39:54Z)
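A toy version of the prune-then-quantize pipeline: zero out small-magnitude weights, uniformly quantize the survivors to a small codebook, and store index/code pairs. The paper's storage format additionally applies source coding on top of this, which is omitted here.

```python
# Sketch: magnitude pruning followed by uniform quantization of the survivors.
import numpy as np

def prune_and_quantize(w, sparsity=0.9, n_levels=16):
    w = w.copy()
    thresh = np.quantile(np.abs(w), sparsity)
    w[np.abs(w) < thresh] = 0.0                        # magnitude pruning
    nz_idx = np.flatnonzero(w)
    # uniform quantization of surviving weights to n_levels codewords
    lo, hi = w[nz_idx].min(), w[nz_idx].max()
    codes = np.round((w[nz_idx] - lo) / (hi - lo) * (n_levels - 1)).astype(np.uint8)
    centers = lo + codes / (n_levels - 1) * (hi - lo)  # dequantized values
    return nz_idx, codes, (lo, hi), centers

rng = np.random.default_rng(0)
w = rng.normal(size=4096)
idx, codes, (lo, hi), centers = prune_and_quantize(w)
print(f"stored {idx.size} of {w.size} weights at 4 bits each")
```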
- Compressing Neural Networks: Towards Determining the Optimal Layer-wise Decomposition [62.41259783906452]
We present a novel global compression framework for deep neural networks.
It automatically analyzes each layer to identify the optimal per-layer compression ratio.
Our results open up new avenues for future research into the global performance-size trade-offs of modern neural networks.
arXiv Detail & Related papers (2021-07-23T20:01:30Z)
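A much-simplified stand-in for the per-layer analysis: choose, for each layer, the smallest SVD rank that retains a target fraction of spectral energy, so layers with fast-decaying spectra are compressed harder. The paper derives its ratios from a global optimization instead; this only illustrates why per-layer ratios should differ.

```python
# Sketch: pick a per-layer rank from the layer's spectral energy.
import numpy as np

def rank_for_energy(W, energy=0.9):
    s = np.linalg.svd(W, compute_uv=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(cum, energy) + 1)

rng = np.random.default_rng(0)
layers = {
    "layer1": rng.normal(size=(128, 128)),                              # flat spectrum
    "layer2": rng.normal(size=(128, 16)) @ rng.normal(size=(16, 128)),  # low rank
}
for name, W in layers.items():
    r = rank_for_energy(W, 0.9)
    ratio = r * (W.shape[0] + W.shape[1]) / W.size
    print(f"{name}: rank {r}, factorized size {ratio:.2f}x of dense")
```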
- Synthesis and Pruning as a Dynamic Compression Strategy for Efficient Deep Neural Networks [1.8275108630751844]
We propose a novel strategic synthesis algorithm for feedforward networks that draws directly from the brain's behaviours when learning.
Unlike existing approaches that advocate random selection, we select high-performing nodes as starting points for new edges.
The strategy aims to produce only useful connections, resulting in a smaller residual network structure.
arXiv Detail & Related papers (2020-11-23T12:30:57Z)
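A hypothetical illustration of the selection step: score hidden units by their mean absolute activation on a batch and propose new edges starting from the top scorers rather than from random nodes. The scoring proxy and the wiring rule here are assumptions for illustration only.

```python
# Sketch: grow new edges from the highest-activating hidden units.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(64, 32))       # input -> hidden weights
x = rng.normal(size=(256, 32))       # a batch of inputs
h = np.maximum(x @ W1.T, 0.0)        # ReLU hidden activations, shape (256, 64)

scores = np.abs(h).mean(axis=0)      # per-node performance proxy (assumed)
top_nodes = np.argsort(scores)[-8:]  # 8 highest-scoring hidden units

# synthesize new edges from top nodes to a few of 10 output units
new_edges = [(int(src), int(dst))
             for src in top_nodes
             for dst in rng.choice(10, 2, replace=False)]
print(new_edges[:5])
```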
- Permute, Quantize, and Fine-tune: Efficient Compression of Neural Networks [70.0243910593064]
Key to the success of vector quantization is deciding which parameter groups should be compressed together.
In this paper we make the observation that the weights of two adjacent layers can be permuted while expressing the same function.
We then establish a connection to rate-distortion theory and search for permutations that result in networks that are easier to compress.
arXiv Detail & Related papers (2020-10-29T15:47:26Z)
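The permutation observation is easy to verify numerically: for y = W2·relu(W1·x), permuting the hidden units (rows of W1 together with the matching columns of W2) leaves the function unchanged, which is exactly the degree of freedom the paper searches over for easier-to-compress networks.

```python
# Sketch: adjacent layers express the same function under hidden-unit permutation.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(64, 32)), rng.normal(size=(10, 64))
x = rng.normal(size=32)
perm = rng.permutation(64)

relu = lambda z: np.maximum(z, 0.0)
y = W2 @ relu(W1 @ x)
y_perm = W2[:, perm] @ relu(W1[perm] @ x)  # permute rows of W1, columns of W2
print(np.allclose(y, y_perm))              # True
```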
- ESPN: Extremely Sparse Pruned Networks [50.436905934791035]
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
Our algorithm represents a hybrid approach between single shot network pruning methods and Lottery-Ticket type approaches.
arXiv Detail & Related papers (2020-06-28T23:09:27Z)
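A skeleton of iterative mask discovery in the magnitude-pruning style: alternate brief (re)training with pruning on a geometric sparsity schedule until the target density is reached. ESPN's actual objective and schedule differ; the `train_step` hook below is a placeholder assumption.

```python
# Sketch: iteratively shrink a binary mask over weights by magnitude.
import numpy as np

def iterative_mask(w, keep_final=0.01, steps=5, train_step=None):
    mask = np.ones(w.size, dtype=bool)
    for t in range(1, steps + 1):
        if train_step is not None:
            w = train_step(w, mask)            # brief (re)training between prunes
        keep = keep_final ** (t / steps)       # geometric sparsity schedule
        k = max(1, int(round(keep * w.size)))
        order = np.argsort(np.abs(np.where(mask, w, 0.0)))
        mask[:] = False
        mask[order[-k:]] = True                # keep the k largest magnitudes
    return mask

rng = np.random.default_rng(0)
w = rng.normal(size=10_000)
mask = iterative_mask(w)
print(f"final density: {mask.mean():.4f}")
```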
- Structured Sparsification with Joint Optimization of Group Convolution and Channel Shuffle [117.95823660228537]
We propose a novel structured sparsification method for efficient network compression.
The proposed method automatically induces structured sparsity on the convolutional weights.
We also address the problem of inter-group communication with a learnable channel shuffle mechanism.
arXiv Detail & Related papers (2020-02-19T12:03:10Z)
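A sketch of the two ingredients: a group-lasso penalty that drives whole channel groups of a convolutional weight toward zero (structured sparsity), and a channel shuffle that mixes information across groups. The paper learns the shuffle; the fixed ShuffleNet-style permutation below is a stand-in.

```python
# Sketch: group-lasso penalty on conv weights plus a fixed channel shuffle.
import torch

def group_lasso(weight, n_groups):
    # weight: (out_ch, in_ch, kH, kW); penalize the l2 norm of each channel group
    groups = weight.view(n_groups, -1)
    return groups.norm(dim=1).sum()

def channel_shuffle(x, n_groups):
    b, c, h, w = x.shape
    return x.view(b, n_groups, c // n_groups, h, w).transpose(1, 2).reshape(b, c, h, w)

w = torch.randn(32, 16, 3, 3, requires_grad=True)
penalty = group_lasso(w, n_groups=4)  # added to the task loss during training
x = torch.randn(2, 32, 8, 8)
y = channel_shuffle(x, n_groups=4)
print(penalty.item(), y.shape)
```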