Do Neural Networks Compress Manifolds Optimally?
- URL: http://arxiv.org/abs/2205.08518v1
- Date: Tue, 17 May 2022 17:41:53 GMT
- Title: Do Neural Networks Compress Manifolds Optimally?
- Authors: Sourbh Bhadane, Aaron B. Wagner, Johannes Ballé
- Abstract summary: Artificial Neural-Network-based (ANN-based) lossy compressors have recently obtained striking results on several sources.
We show that state-of-the-art ANN-based compressors fail to optimally compress the sources, especially at high rates.
- Score: 22.90338354582811
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial Neural-Network-based (ANN-based) lossy compressors have recently
obtained striking results on several sources. Their success may be ascribed to
an ability to identify the structure of low-dimensional manifolds in
high-dimensional ambient spaces. Indeed, prior work has shown that ANN-based
compressors can achieve the optimal entropy-distortion curve for some such
sources. In contrast, we determine the optimal entropy-distortion tradeoffs for
two low-dimensional manifolds with circular structure and show that
state-of-the-art ANN-based compressors fail to optimally compress the sources,
especially at high rates.
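As a toy illustration of the kind of source involved (not the paper's exact construction or its optimal codes), the sketch below embeds a uniform angle into an 8-dimensional ambient space, quantizes the intrinsic angle, and reports empirical entropy (rate) and mean squared distortion at a few quantizer resolutions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100_000, 8                                   # samples, ambient dimension

# Toy source with circular structure: a random orthonormal 2-D frame embeds
# points of the unit circle into R^d.
theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
frame, _ = np.linalg.qr(rng.standard_normal((d, 2)))             # d x 2, orthonormal columns
x = np.stack([np.cos(theta), np.sin(theta)], axis=1) @ frame.T   # shape (n, d)

# A manifold-aware compressor: uniformly quantize the intrinsic angle.
for k in (4, 16, 64):                                            # number of quantizer cells
    idx = np.floor(theta / (2.0 * np.pi) * k).astype(int)
    theta_hat = (idx + 0.5) * (2.0 * np.pi / k)                  # cell midpoints
    x_hat = np.stack([np.cos(theta_hat), np.sin(theta_hat)], axis=1) @ frame.T

    p = np.bincount(idx, minlength=k) / n                        # empirical cell probabilities
    rate = -np.sum(p[p > 0] * np.log2(p[p > 0]))                 # entropy in bits per sample
    distortion = np.mean(np.sum((x - x_hat) ** 2, axis=1))       # MSE in the ambient space
    print(f"k={k:3d}  rate={rate:5.2f} bits  distortion={distortion:.6f}")
```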
Related papers
- Optimal Neural Compressors for the Rate-Distortion-Perception Tradeoff [29.69773024077467]
Recent efforts in neural compression have focused on the rate-distortion-perception tradeoff.
In this paper, we propose low-complexity neural compressors that benefit from high packing efficiency.
arXiv Detail & Related papers (2025-03-21T22:18:52Z)
- Theoretical Guarantees for Low-Rank Compression of Deep Neural Networks [5.582683296425384]
Deep neural networks have achieved state-of-the-art performance across numerous applications.
Low-rank approximation techniques offer a promising solution by reducing the size and complexity of these networks.
We develop an analytical framework for data-driven post-training low-rank compression.
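A minimal sketch of post-training low-rank compression (not the paper's analytical framework or its guarantees): truncate the SVD of a trained weight matrix, keep two thin factors, and compare parameter count and approximation error.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical "trained" weight matrix with a decaying spectrum.
W = rng.standard_normal((512, 256)) @ np.diag(0.9 ** np.arange(256))

def low_rank_compress(W, rank):
    """Replace W (m x n) by two thin factors A (m x r) and B (r x n)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank] * s[:rank], Vt[:rank, :]

for r in (8, 32, 128):
    A, B = low_rank_compress(W, r)
    rel_err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
    print(f"rank={r:3d}  params {W.size} -> {A.size + B.size}  relative error {rel_err:.3f}")
```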
arXiv Detail & Related papers (2025-02-04T23:10:13Z)
- Diff-PCC: Diffusion-based Neural Compression for 3D Point Clouds [12.45444994957525]
We introduce the first diffusion-based point cloud compression method, dubbed Diff-PCC, to leverage the expressive power of the diffusion model for generative and aesthetically superior decoding.
Experiments demonstrate that the proposed Diff-PCC achieves state-of-the-art compression performance.
arXiv Detail & Related papers (2024-08-20T04:55:29Z)
- SRN-SZ: Deep Learning-Based Scientific Error-bounded Lossy Compression with Super-resolution Neural Networks [13.706955134941385]
We propose SRN-SZ, a deep learning-based scientific error-bounded lossy compressor.
SRN-SZ applies the most advanced super-resolution network HAT for its compression.
In experiments, SRN-SZ achieves up to 75% compression ratio improvements under the same error bound.
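SRN-SZ itself relies on the HAT super-resolution network; the sketch below only illustrates the error-bounded quantization step common to SZ-style compressors, with a trivial neighbor predictor standing in for the network and a hypothetical error bound.

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.cumsum(rng.standard_normal(10_000)) * 0.01   # smooth 1-D stand-in for a scientific field
eps = 1e-3                                             # absolute error bound

recon = np.empty_like(data)
codes = np.empty(len(data), dtype=np.int64)
prev = 0.0
for i, v in enumerate(data):
    pred = prev                                 # trivial predictor; SRN-SZ uses a super-resolution net
    q = int(np.round((v - pred) / (2 * eps)))   # quantize the residual with bin width 2*eps
    codes[i] = q                                # these integers would be entropy-coded
    recon[i] = pred + q * (2 * eps)             # decoder-side reconstruction
    prev = recon[i]                             # predict from reconstructed (not original) data

assert np.max(np.abs(recon - data)) <= eps + 1e-12      # the error bound holds pointwise
p = np.bincount(codes - codes.min()) / len(codes)
bits = -np.sum(p[p > 0] * np.log2(p[p > 0]))            # rough pre-coding rate estimate
print(f"max error {np.max(np.abs(recon - data)):.2e}, ~{bits:.2f} bits/value")
```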
arXiv Detail & Related papers (2023-09-07T22:15:32Z)
- Low-rank Tensor Decomposition for Compression of Convolutional Neural Networks Using Funnel Regularization [1.8579693774597708]
We propose a model reduction method to compress the pre-trained networks using low-rank tensor decomposition.
A new regularization method, called funnel function, is proposed to suppress the unimportant factors during the compression.
For ResNet18 on ImageNet2012, our reduced model reaches a more than two-times speedup in terms of GMACs with merely a 0.7% Top-1 accuracy drop.
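The funnel regularizer is specific to the paper, but the decomposition step can be sketched generically: reshape a (hypothetical) conv kernel, truncate its SVD, and split the layer into a rank-r k x k convolution followed by a 1 x 1 convolution.

```python
import numpy as np

rng = np.random.default_rng(0)
c_out, c_in, k = 256, 128, 3
W = rng.standard_normal((c_out, c_in, k, k))            # hypothetical pretrained conv kernel

def decompose_conv(W, rank):
    """Split one conv into a rank-r k x k conv followed by a 1 x 1 conv."""
    c_out, c_in, k, _ = W.shape
    U, s, Vt = np.linalg.svd(W.reshape(c_out, c_in * k * k), full_matrices=False)
    first = Vt[:rank].reshape(rank, c_in, k, k)                    # rank x c_in x k x k kernel
    second = (U[:, :rank] * s[:rank]).reshape(c_out, rank, 1, 1)   # c_out x rank x 1 x 1 kernel
    return first, second

for r in (16, 64):
    first, second = decompose_conv(W, r)
    approx = (second.reshape(c_out, r) @ first.reshape(r, -1)).reshape(W.shape)
    err = np.linalg.norm(W - approx) / np.linalg.norm(W)
    ratio = (first.size + second.size) / W.size
    print(f"rank={r:3d}: params kept {ratio:.1%}, relative error {err:.3f}")
```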
arXiv Detail & Related papers (2021-12-07T13:41:51Z)
- Out-of-Distribution Robustness in Deep Learning Compression [28.049124970993056]
Deep neural network (DNN) compression systems have proved to be highly effective for designing source codes for many natural sources.
These systems, however, are vulnerable to distribution shifts and out-of-distribution (OOD) data, which limits their real-world applicability.
We propose algorithmic and architectural frameworks built on two principled methods: one that trains DNN compressors using distributionally robust optimization (DRO), and another that uses a structured latent code.
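Neither framework is reproduced here; the sketch below only illustrates the DRO principle on a deliberately tiny problem: pick a fixed-rate scalar quantizer step size that minimizes the worst-case distortion over a small, hypothetical family of source distributions, rather than the distortion under a single training distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ambiguity set: the training distribution plus two plausible shifts.
samples = {
    "train":   rng.normal(0.0, 1.0, 20_000),
    "shifted": rng.normal(0.8, 1.0, 20_000),
    "heavy":   rng.standard_t(3, 20_000),
}

def distortion(x, step, levels=8):
    """MSE of a fixed-rate (log2(levels) bits) uniform scalar quantizer centered at zero."""
    centers = step * (np.arange(levels) - (levels - 1) / 2.0)
    x_hat = centers[np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)]
    return float(np.mean((x - x_hat) ** 2))

steps = np.linspace(0.1, 1.5, 60)
erm_step = min(steps, key=lambda s: distortion(samples["train"], s))                   # fit training source only
dro_step = min(steps, key=lambda s: max(distortion(x, s) for x in samples.values()))   # minimize worst case

for name, x in samples.items():
    print(f"{name:8s}  ERM (step {erm_step:.2f}): {distortion(x, erm_step):.4f}   "
          f"DRO (step {dro_step:.2f}): {distortion(x, dro_step):.4f}")
```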
arXiv Detail & Related papers (2021-10-13T19:54:07Z)
- Towards Compact CNNs via Collaborative Compression [166.86915086497433]
We propose a Collaborative Compression scheme, which jointly applies channel pruning and tensor decomposition to compress CNN models.
We achieve a 52.9% FLOPs reduction by removing 48.4% of the parameters of ResNet-50, with only a 0.56% Top-1 accuracy drop on ImageNet 2012.
arXiv Detail & Related papers (2021-05-24T12:07:38Z)
- An Efficient Statistical-based Gradient Compression Technique for Distributed Training Systems [77.88178159830905]
Sparsity-Inducing Distribution-based Compression (SIDCo) is a threshold-based sparsification scheme that enjoys similar threshold estimation quality to deep gradient compression (DGC).
Our evaluation shows SIDCo speeds up training by up to 41.7%, 7.6%, and 1.9% compared to the no-compression baseline, Top-k, and DGC compressors, respectively.
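SIDCo's exact statistical models are not reproduced here; the sketch below shows the general threshold-estimation idea with a hypothetical exponential fit to gradient magnitudes, compared against the exact top-k threshold.

```python
import numpy as np

rng = np.random.default_rng(0)
d, target_ratio = 1_000_000, 0.001              # gradient length, fraction of entries to keep

grad = rng.laplace(0.0, 0.1, size=d)            # stand-in for a stochastic gradient
mags = np.abs(grad)

# Fit an exponential model to |g| and invert its tail to get a threshold:
# P(|g| > t) = exp(-t / scale)  =>  t = -scale * log(target_ratio).
scale = mags.mean()                             # MLE of the exponential scale parameter
threshold = -scale * np.log(target_ratio)

mask = mags > threshold
sparse_grad = np.where(mask, grad, 0.0)         # what would actually be communicated

exact_k = int(target_ratio * d)
exact_threshold = np.partition(mags, -exact_k)[-exact_k]   # true top-k threshold (needs a partial sort)
print(f"estimated threshold {threshold:.4f} vs exact top-k threshold {exact_threshold:.4f}")
print(f"kept {mask.mean():.4%} of entries (target {target_ratio:.4%})")
```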
arXiv Detail & Related papers (2021-01-26T13:06:00Z)
- Neural Network Compression Via Sparse Optimization [23.184290795230897]
We propose a model compression framework based on the recent progress on sparse optimization.
We achieve up to 7.2x and 2.9x FLOPs reduction, at the same level of evaluation accuracy, on VGG16 for CIFAR10 and ResNet50 for ImageNet.
arXiv Detail & Related papers (2020-11-10T03:03:55Z)
- Unfolding Neural Networks for Compressive Multichannel Blind Deconvolution [71.29848468762789]
We propose a learned-structured unfolding neural network for the problem of compressive sparse multichannel blind-deconvolution.
In this problem, each channel's measurements are given as the convolution of a common source signal with a channel-specific sparse filter.
We demonstrate that our method is superior to classical structured compressive sparse multichannel blind-deconvolution methods in terms of accuracy and speed of sparse filter recovery.
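Only the measurement model is sketched below (not the unfolded network): each channel observes the convolution of a shared source with its own sparse filter and is then compressed with a random measurement matrix; all shapes and the sparsity level are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, L, num_channels, sparsity, m = 256, 32, 4, 3, 64   # signal length, filter length, channels, nonzeros, measurements

s = rng.standard_normal(n)                      # common source signal shared by all channels
A = rng.standard_normal((m, n)) / np.sqrt(m)    # random compressive measurement matrix (m << n)

filters, measurements = [], []
for _ in range(num_channels):
    h = np.zeros(L)
    h[rng.choice(L, size=sparsity, replace=False)] = rng.standard_normal(sparsity)  # sparse filter
    y = np.convolve(s, h, mode="full")[:n]      # channel output: common source * sparse filter
    filters.append(h)
    measurements.append(A @ y)                  # each channel is observed only through m measurements

print(f"each {n}-sample channel output observed via {m} compressive measurements")
print(f"filter support sizes: {[int(np.count_nonzero(h)) for h in filters]}")
```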
arXiv Detail & Related papers (2020-10-22T02:34:33Z)
- GAN Slimming: All-in-One GAN Compression by A Unified Optimization Framework [94.26938614206689]
We propose the first unified optimization framework combining multiple compression means for GAN compression, dubbed GAN Slimming.
We apply GS to compress CartoonGAN, a state-of-the-art style transfer network, by up to 47 times, with minimal visual quality degradation.
arXiv Detail & Related papers (2020-08-25T14:39:42Z)
- On Biased Compression for Distributed Learning [55.89300593805943]
We show for the first time that biased compressors can lead to linear convergence rates both in the single node and distributed settings.
We propose several new biased compressors with promising theoretical guarantees and practical performance.
arXiv Detail & Related papers (2020-02-27T19:52:24Z)
- Structured Sparsification with Joint Optimization of Group Convolution and Channel Shuffle [117.95823660228537]
We propose a novel structured sparsification method for efficient network compression.
The proposed method automatically induces structured sparsity on the convolutional weights.
We also address the problem of inter-group communication with a learnable channel shuffle mechanism.
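The joint sparsification objective is specific to the paper; the sketch below only shows a fixed (non-learnable) channel shuffle, the reshape-transpose-reshape over the channel axis that mixes information across convolution groups.

```python
import numpy as np

def channel_shuffle(x, groups):
    """Fixed channel shuffle for a feature map of shape (N, C, H, W)."""
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by the number of groups"
    x = x.reshape(n, groups, c // groups, h, w)   # split channels into groups
    x = x.transpose(0, 2, 1, 3, 4)                # interleave channels across groups
    return x.reshape(n, c, h, w)

x = np.arange(1 * 8 * 1 * 1).reshape(1, 8, 1, 1)  # 8 channels labelled 0..7, shuffled in 2 groups of 4
print(channel_shuffle(x, groups=2).reshape(-1))   # -> [0 4 1 5 2 6 3 7]
```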
arXiv Detail & Related papers (2020-02-19T12:03:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.