Towards Empirical Sandwich Bounds on the Rate-Distortion Function
- URL: http://arxiv.org/abs/2111.12166v1
- Date: Tue, 23 Nov 2021 21:50:27 GMT
- Title: Towards Empirical Sandwich Bounds on the Rate-Distortion Function
- Authors: Yibo Yang, Stephan Mandt
- Abstract summary: The rate-distortion (R-D) function, a key quantity in information theory, characterizes the fundamental limit of how much a data source can be compressed.
This paper makes the first attempt at an algorithm for sandwiching the R-D function of a general (not necessarily discrete) source, requiring only i.i.d. data samples.
- Score: 33.21088274828794
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rate-distortion (R-D) function, a key quantity in information theory,
characterizes the fundamental limit of how much a data source can be compressed
subject to a fidelity criterion, by any compression algorithm. As researchers
push for ever-improving compression performance, establishing the R-D function
of a given data source is not only of scientific interest, but also sheds light
on the possible room for improving compression algorithms. Previous work on
this problem relied on distributional assumptions on the data source (Gibson,
2017) or only applied to discrete data. By contrast, this paper makes the first
attempt at an algorithm for sandwiching the R-D function of a general (not
necessarily discrete) source requiring only i.i.d. data samples. We estimate
R-D sandwich bounds on Gaussian and high-dimensional banana-shaped sources, as
well as GAN-generated images. Our R-D upper bound on natural images indicates
room for improving the performance of state-of-the-art image compression
methods by 1 dB in PSNR at various bitrates.
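For intuition about the 1 dB claim, the R-D function of a Gaussian source under squared-error distortion is known in closed form, R(D) = max(0, ½ log2(σ²/D)) bits per sample. A minimal sketch (not the paper's sandwiching algorithm, values illustrative) computing it and the per-sample rate saving that a 1 dB distortion reduction implies for this toy source:

```python
import math

def gaussian_rd(D: float, var: float = 1.0) -> float:
    """Analytic rate-distortion function of a Gaussian source with
    variance `var` under squared-error distortion, in bits per sample:
    R(D) = max(0, 0.5 * log2(var / D))."""
    return max(0.0, 0.5 * math.log2(var / D))

# A 1 dB PSNR improvement corresponds to reducing the MSE by 10**0.1;
# the extra rate the analytic curve charges for that reduction:
D = 0.25
delta_rate = gaussian_rd(D / 10 ** 0.1) - gaussian_rd(D)
```

For a fixed-variance Gaussian this gap is 0.05·log2(10) ≈ 0.166 bits per sample, independent of the operating point D (as long as both points lie on the non-zero part of the curve).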
Related papers
- Compression of Structured Data with Autoencoders: Provable Benefit of
Nonlinearities and Depth [83.15263499262824]
We prove that gradient descent converges to a solution that completely disregards the sparse structure of the input.
We show how to improve upon Gaussian performance for the compression of sparse data by adding a denoising function to a shallow architecture.
We validate our findings on image datasets, such as CIFAR-10 and MNIST.
arXiv Detail & Related papers (2024-02-07T16:32:29Z)
- Estimating the Rate-Distortion Function by Wasserstein Gradient Descent [36.24186820465207]
In the theory of lossy compression, the rate-distortion function $R(D)$ describes how much a data source can be compressed (in bit-rate) at any given level of fidelity (distortion).
We propose a new method to estimate $R(D)$ from the perspective of optimal transport.
arXiv Detail & Related papers (2023-10-29T05:29:59Z)
- Image Compression and Decompression Framework Based on Latent Diffusion Model for Breast Mammography [0.0]
This research presents a novel framework for the compression and decompression of medical images utilizing the Latent Diffusion Model (LDM).
The LDM represents an advancement over the denoising diffusion probabilistic model (DDPM), with the potential to yield superior image quality.
A possible application of LDM and Torchvision for image upscaling has been explored using medical image data.
arXiv Detail & Related papers (2023-10-08T22:08:59Z)
- Modality-Agnostic Variational Compression of Implicit Neural Representations [96.35492043867104]
We introduce a modality-agnostic neural compression algorithm based on a functional view of data and parameterised as an Implicit Neural Representation (INR).
Bridging the gap between latent coding and sparsity, we obtain compact latent representations non-linearly mapped to a soft gating mechanism.
After obtaining a dataset of such latent representations, we directly optimise the rate/distortion trade-off in a modality-agnostic space using neural compression.
arXiv Detail & Related papers (2023-01-23T15:22:42Z)
- Neural Estimation of the Rate-Distortion Function With Applications to Operational Source Coding [25.59334941818991]
A fundamental question in designing lossy data compression schemes is how well one can do in comparison with the rate-distortion function.
We investigate methods to estimate the rate-distortion function on large, real-world data.
We apply the resulting rate-distortion estimator, called NERD, on popular image datasets.
arXiv Detail & Related papers (2022-04-04T16:06:40Z)
- Unified Multivariate Gaussian Mixture for Efficient Neural Image Compression [151.3826781154146]
Modeling latent variables with priors and hyperpriors is an essential problem in variational image compression.
We find inter-correlations and intra-correlations exist when observing latent variables in a vectorized perspective.
Our model has better rate-distortion performance and an impressive $3.18\times$ compression speed-up.
arXiv Detail & Related papers (2022-03-21T11:44:17Z)
- Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
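As a toy illustration of the quantization and entropy-coding stages such a pipeline involves (not the paper's actual method; the step size and weights below are illustrative), one can uniformly quantize a weight vector and bound the coded size by its empirical entropy:

```python
import math
from collections import Counter

def quantize_and_rate(params, step=0.02):
    """Uniformly quantize a parameter vector and estimate the bitrate an
    entropy coder would need via the empirical entropy of the quantized
    symbols. A toy stand-in for the quantization + entropy-coding stages
    of an INR compression pipeline."""
    symbols = [round(w / step) for w in params]   # quantization indices
    dequant = [s * step for s in symbols]         # reconstructed weights
    counts = Counter(symbols)
    n = len(symbols)
    # Empirical entropy in bits, summed over all parameters.
    bits = -sum(c * math.log2(c / n) for c in counts.values())
    mse = sum((w, v) == (w, v) and (w - v) ** 2 for w, v in zip(params, dequant)) / n
    return dequant, bits, mse

params = [0.031, -0.018, 0.052, 0.0, -0.049, 0.021]
dequant, bits, mse = quantize_and_rate(params)
```

The reconstruction error is bounded by (step/2)² per weight, and shrinking `step` trades a lower distortion for a higher entropy-coded rate, which is exactly the trade-off the retraining stage of such pipelines optimizes.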
arXiv Detail & Related papers (2021-12-08T13:02:53Z)
- Linear Convergent Decentralized Optimization with Compression [50.44269451541387]
Existing decentralized algorithms with compression mainly focus on compressing DGD-type algorithms.
Motivated by primal-dual algorithms, this paper proposes the first LinEAr convergent Decentralized algorithm with compression, LEAD.
arXiv Detail & Related papers (2020-07-01T04:35:00Z)
- The Power of Triply Complementary Priors for Image Compressive Sensing [89.14144796591685]
We propose a joint low-rank and deep (LRD) image model, which contains a pair of triply complementary priors.
We then propose a novel hybrid plug-and-play framework based on the LRD model for image CS.
To make the optimization tractable, a simple yet effective algorithm is proposed to solve the proposed H-based image CS problem.
arXiv Detail & Related papers (2020-05-16T08:17:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.