Optimal Neural Compressors for the Rate-Distortion-Perception Tradeoff
- URL: http://arxiv.org/abs/2503.17558v1
- Date: Fri, 21 Mar 2025 22:18:52 GMT
- Title: Optimal Neural Compressors for the Rate-Distortion-Perception Tradeoff
- Authors: Eric Lei, Hamed Hassani, Shirin Saeedi Bidokhti
- Abstract summary: Recent efforts in neural compression have focused on the rate-distortion-perception tradeoff. In this paper, we propose neural compressors that are low complexity and benefit from high packing efficiency.
- Score: 29.69773024077467
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent efforts in neural compression have focused on the rate-distortion-perception (RDP) tradeoff, where the perception constraint ensures the source and reconstruction distributions are close in terms of a statistical divergence. Theoretical work on RDP describes interesting properties of RDP-optimal compressors without providing constructive and low complexity solutions. While classical rate distortion theory shows that optimal compressors should efficiently pack the space, RDP theory additionally shows that infinite randomness shared between the encoder and decoder may be necessary for RDP optimality. In this paper, we propose neural compressors that are low complexity and benefit from high packing efficiency through lattice coding and shared randomness through shared dithering over the lattice cells. For two important settings, namely infinite shared and zero shared randomness, we analyze the rate, distortion, and perception achieved by our proposed neural compressors and further show optimality in the presence of infinite shared randomness. Experimentally, we investigate the roles these two components of our design, lattice coding and randomness, play in the performance of neural compressors on synthetic and real-world data. We observe that performance improves with more shared randomness and better lattice packing.
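The second ingredient, shared dithering over lattice cells, can be illustrated with a minimal one-dimensional sketch in which the "lattice" is a uniform scalar grid and the dither is reproduced on both sides from a shared seed. The function names, step size, and interface below are ours, chosen for illustration, and are not the paper's construction:

```python
import numpy as np

STEP = 0.5  # lattice cell width; an illustrative value, not from the paper


def encode(x, shared_seed):
    # Dither drawn uniformly over one lattice cell from a seed that the
    # decoder also knows: this plays the role of shared randomness.
    u = np.random.default_rng(shared_seed).uniform(-STEP / 2, STEP / 2, x.shape)
    # Quantize the dithered source to the nearest lattice point.
    return np.round((x + u) / STEP).astype(int)


def decode(index, shape, shared_seed):
    # Re-create the same dither from the shared seed and subtract it.
    u = np.random.default_rng(shared_seed).uniform(-STEP / 2, STEP / 2, shape)
    return index * STEP - u


x = np.random.default_rng(0).standard_normal(10_000)
idx = encode(x, shared_seed=123)
x_hat = decode(idx, x.shape, shared_seed=123)
# Subtractive dithering makes the error uniform over one cell and independent
# of x, so the reconstruction is the source plus small uniform noise.
print("MSE:", np.mean((x - x_hat) ** 2))  # close to STEP**2 / 12
```

In higher dimensions, a lattice with better packing efficiency replaces np.round with a nearest-lattice-point search, which is the direction the paper's design points to.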
Related papers
- SPC-NeRF: Spatial Predictive Compression for Voxel Based Radiance Field [41.33347056627581]
We propose SPC-NeRF, a novel framework applying spatial predictive coding in EVG compression.
Our method achieves a 32% bit saving compared to the state-of-the-art method VQRF.
arXiv Detail & Related papers (2024-02-26T07:40:45Z)
- Lower Bounds and Accelerated Algorithms in Distributed Stochastic Optimization with Communication Compression [39.65082601416051]
Communication compression is an essential strategy for alleviating communication overhead. We propose NEOLITHIC, a nearly optimal algorithm for compression under mild conditions.
arXiv Detail & Related papers (2023-05-12T17:02:43Z)
- Federated Random Reshuffling with Compression and Variance Reduction [0.0]
Random Reshuffling (RR) is an immensely popular method for training supervised machine learning models via empirical risk minimization.
It is embedded and often set as default in standard machine learning software.
We introduce three new algorithms that improve FedRR further: one taming the variance coming from shuffling, one taming the variance due to compression, and one combining both (plain RR itself is sketched after this entry).
arXiv Detail & Related papers (2022-05-08T16:46:11Z)
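As background for the entry above, here is a minimal single-node sketch of plain Random Reshuffling itself: a fresh permutation per epoch, sampling without replacement. The federated, compression, and variance-reduction parts of FedRR are not shown, and the grad_fn interface is a hypothetical one chosen for this sketch:

```python
import numpy as np


def random_reshuffling_sgd(grad_fn, w, n_samples, lr=0.01, epochs=5, seed=0):
    """Plain RR: one pass over a fresh permutation of the data per epoch.

    grad_fn(w, i) returns the gradient of sample i's loss at w
    (a hypothetical interface for this sketch, not FedRR's API).
    """
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        for i in rng.permutation(n_samples):  # without replacement
            w = w - lr * grad_fn(w, i)
    return w


# Toy least-squares problem: minimize the average of (a_i @ w - b_i)^2.
rng = np.random.default_rng(1)
A, b = rng.standard_normal((100, 3)), rng.standard_normal(100)


def grad(w, i):
    return 2.0 * (A[i] @ w - b[i]) * A[i]


print(random_reshuffling_sgd(grad, np.zeros(3), n_samples=100))
```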
- Revisiting Discriminator in GAN Compression: A Generator-discriminator Cooperative Compression Scheme [65.5405625485559]
GAN compression aims to reduce the tremendous computational overhead and memory usage incurred when deploying GANs on resource-constrained edge devices.
In this work, we revisit the role of the discriminator in GAN compression and design a novel generator-discriminator cooperative compression scheme, termed GCC.
arXiv Detail & Related papers (2021-10-27T13:54:55Z)
- Out-of-Distribution Robustness in Deep Learning Compression [28.049124970993056]
Deep neural network (DNN) compression systems have proved to be highly effective for designing source codes for many natural sources.
These systems are vulnerable to distribution shifts and out-of-distribution (OOD) data, which limits their real-world applicability.
We propose algorithmic and architectural frameworks built on two principled methods: one that trains DNN compressors using distributionally robust optimization (DRO), and another that uses a structured latent code.
arXiv Detail & Related papers (2021-10-13T19:54:07Z)
- Permutation Compressors for Provably Faster Distributed Nonconvex Optimization [68.8204255655161]
We show that the MARINA method of Gorbunov et al. (2021) can be considered a state-of-the-art method in terms of theoretical communication complexity.
We extend the theory of MARINA to support potentially correlated compressors, moving beyond the classical setting of independent compressors.
arXiv Detail & Related papers (2021-10-07T09:38:15Z)
- Innovation Compression for Communication-efficient Distributed Optimization with Linear Convergence [23.849813231750932]
This paper proposes a communication-efficient linearly convergent distributed (COLD) algorithm to solve strongly convex optimization problems.
By compressing innovation vectors, COLD is able to achieve linear convergence for a class of $\delta$-contracted compressors (a minimal example of such a compressor is sketched after this entry).
Numerical experiments demonstrate the advantages of both algorithms under different compressors.
arXiv Detail & Related papers (2021-05-14T08:15:18Z)
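A standard concrete instance of a contracted compressor is top-$k$, which keeps the largest-magnitude coordinates. The sketch below checks the usual contraction inequality with $\delta = k/d$; whether this definition coincides exactly with COLD's notion of $\delta$-contraction is an assumption on our part:

```python
import numpy as np


def top_k(x, k):
    """Keep the k largest-magnitude entries of x, zeroing the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out


# Top-k satisfies ||C(x) - x||^2 <= (1 - k/d) * ||x||^2 for every x,
# i.e. it is a delta-contracted compressor with delta = k/d.
rng = np.random.default_rng(0)
d, k = 100, 10
for _ in range(5):
    x = rng.standard_normal(d)
    err = np.sum((top_k(x, k) - x) ** 2)
    assert err <= (1 - k / d) * np.sum(x**2)
    print(f"err / ||x||^2 = {err / np.sum(x**2):.3f} (bound: {1 - k / d:.2f})")
```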
- An Efficient Statistical-based Gradient Compression Technique for Distributed Training Systems [77.88178159830905]
Sparsity-Inducing Distribution-based Compression (SIDCo) is a threshold-based sparsification scheme that enjoys similar threshold estimation quality to deep gradient compression (DGC).
Our evaluation shows SIDCo speeds up training by up to 41.7x, 7.6x, and 1.9x compared to the no-compression baseline, Topk, and DGC compressors, respectively (the threshold-estimation idea is sketched after this entry).
arXiv Detail & Related papers (2021-01-26T13:06:00Z)
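The threshold-from-distribution idea behind the entry above can be sketched as follows. We assume an exponential model for gradient magnitudes purely for illustration and fit it by its sample mean; SIDCo's actual model family and fitting procedure may differ:

```python
import numpy as np


def fitted_threshold(grad, target_ratio):
    # Model |grad| as Exponential(1/mean) and solve P(|g| > t) = target_ratio
    # in closed form: t = -mean * ln(target_ratio). Illustrative model only.
    return -np.mean(np.abs(grad)) * np.log(target_ratio)


rng = np.random.default_rng(0)
g = rng.laplace(scale=1.0, size=100_000)  # synthetic "gradient"; |g| is Exp(1)
t = fitted_threshold(g, target_ratio=0.01)
kept = np.abs(g) > t
print(f"threshold={t:.3f}, kept ratio={kept.mean():.4f}")  # close to 0.01
```

The appeal over exact top-k selection is that the threshold comes from a cheap statistical fit rather than a sort over the full gradient.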
- On Biased Compression for Distributed Learning [55.89300593805943]
We show for the first time that biased compressors can lead to linear convergence rates in both the single-node and distributed settings.
We propose several new biased compressors with promising theoretical guarantees and practical performance.
arXiv Detail & Related papers (2020-02-27T19:52:24Z)
- Structured Sparsification with Joint Optimization of Group Convolution and Channel Shuffle [117.95823660228537]
We propose a novel structured sparsification method for efficient network compression.
The proposed method automatically induces structured sparsity on the convolutional weights.
We also address the problem of inter-group communication with a learnable channel shuffle mechanism.
arXiv Detail & Related papers (2020-02-19T12:03:10Z)
- End-to-End Facial Deep Learning Feature Compression with Teacher-Student Enhancement [57.18801093608717]
We propose a novel end-to-end feature compression scheme by leveraging the representation and learning capability of deep neural networks.
In particular, the extracted features are compactly coded in an end-to-end manner by optimizing the rate-distortion cost.
We verify the effectiveness of the proposed model on facial features, and experimental results reveal better compression performance in terms of the rate-accuracy tradeoff.
arXiv Detail & Related papers (2020-02-10T10:08:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.