Neural Estimation of the Rate-Distortion Function With Applications to
Operational Source Coding
- URL: http://arxiv.org/abs/2204.01612v1
- Date: Mon, 4 Apr 2022 16:06:40 GMT
- Title: Neural Estimation of the Rate-Distortion Function With Applications to
Operational Source Coding
- Authors: Eric Lei, Hamed Hassani, Shirin Saeedi Bidokhti
- Abstract summary: A fundamental question in designing lossy data compression schemes is how well one can do in comparison with the rate-distortion function.
We investigate methods to estimate the rate-distortion function on large, real-world data.
We apply the resulting rate-distortion estimator, called NERD, on popular image datasets.
- Score: 25.59334941818991
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A fundamental question in designing lossy data compression schemes is how
well one can do in comparison with the rate-distortion function, which
describes the known theoretical limits of lossy compression. Motivated by the
empirical success of deep neural network (DNN) compressors on large, real-world
data, we investigate methods to estimate the rate-distortion function on such
data, which would allow comparison of DNN compressors with optimality. While
one could use the empirical distribution of the data and apply the
Blahut-Arimoto algorithm, this approach presents several computational
challenges and inaccuracies when the datasets are large and high-dimensional,
such as the case of modern image datasets. Instead, we re-formulate the
rate-distortion objective, and solve the resulting functional optimization
problem using neural networks. We apply the resulting rate-distortion
estimator, called NERD, on popular image datasets, and provide evidence that
NERD can accurately estimate the rate-distortion function. Using our estimate,
we show that the rate-distortion achievable by DNN compressors is within
several bits of the rate-distortion function for real-world datasets.
Additionally, NERD provides access to the rate-distortion achieving channel, as
well as samples from its output marginal. Therefore, using recent results in
reverse channel coding, we describe how NERD can be used to construct an
operational one-shot lossy compression scheme with guarantees on the achievable
rate and distortion. Experimental results demonstrate competitive performance
with DNN compressors.
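For intuition, the reformulation can be prototyped compactly. Below is a minimal, illustrative PyTorch sketch (not the authors' released code) of the general technique: for a fixed Lagrange multiplier beta, a generator network parameterizes the output marginal q_Y, and the classical identity min_{q_Y} E_X[-log E_{Y~q_Y} exp(-beta d(X,Y))] = min_{q_{Y|X}} I(X;Y) + beta E[d(X,Y)] is estimated with samples. The names (Generator, nerd_loss, estimate_rd_point), the MLP architecture, and the squared-error distortion are our illustrative assumptions.
```python
import math
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Pushes a Gaussian latent forward to model the output marginal q_Y."""
    def __init__(self, latent_dim=64, data_dim=784):
        super().__init__()
        self.latent_dim = latent_dim
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, data_dim), nn.Sigmoid(),
        )

    def sample(self, m):
        z = torch.randn(m, self.latent_dim)
        return self.net(z)

def sq_distortion(x, y):
    # pairwise d(x_i, y_j) = ||x_i - y_j||^2 / data_dim, shape (batch, m)
    return torch.cdist(x, y).pow(2) / x.shape[1]

def nerd_loss(x, gen, beta, m=512):
    # E_X[-log (1/m) sum_j exp(-beta * d(X, Y_j))], minimized over the generator
    d = sq_distortion(x, gen.sample(m))
    return -(torch.logsumexp(-beta * d, dim=1) - math.log(m)).mean()

@torch.no_grad()
def estimate_rd_point(x, gen, beta, m=2048):
    d = sq_distortion(x, gen.sample(m))
    w = torch.softmax(-beta * d, dim=1)          # induced optimal channel q*(y|x)
    D = (w * d).sum(dim=1).mean()                # expected distortion
    loss = -(torch.logsumexp(-beta * d, dim=1) - math.log(m)).mean()
    R = loss - beta * D                          # rate estimate in nats
    return R.item(), D.item()
```
Minimizing nerd_loss over the generator's parameters by SGD for a grid of beta values traces out an estimate of the R(D) curve; rates come out in nats (divide by log 2 to report bits).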
Related papers
- Reduced storage direct tensor ring decomposition for convolutional neural networks compression [0.0]
We propose a novel low-rank CNN compression method based on reduced storage direct tensor ring decomposition (RSDTR).
The proposed method offers greater flexibility in circular mode permutation and achieves large parameter and FLOPS compression rates.
Experiments on the CIFAR-10 and ImageNet datasets clearly demonstrate the efficiency of RSDTR compared with other state-of-the-art CNN compression approaches.
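For context, a tensor ring (TR) decomposition stores an N-way tensor, such as a convolutional kernel, as a ring of small 3-way cores. The sketch below (illustrative shapes and names; it shows the generic TR format, not the RSDTR algorithm itself) reconstructs a tensor from its cores and reports the parameter saving.
```python
# Generic tensor ring (TR) reconstruction, for intuition only. Core k has
# shape (r_k, d_k, r_{k+1}) with r_{N+1} = r_1; the full tensor is recovered
# by contracting the ring and tracing out the closing rank index.
import numpy as np

def tr_reconstruct(cores):
    out = cores[0]                                   # (r1, d1, r2)
    for core in cores[1:]:
        # contract the trailing rank index with the next core's leading one
        out = np.tensordot(out, core, axes=([-1], [0]))
    # out has shape (r1, d1, ..., dN, r1); close the ring with a trace
    return np.trace(out, axis1=0, axis2=-1)

# Example: a 4-way conv kernel (out_ch, in_ch, kh, kw) = (64, 64, 3, 3)
rng = np.random.default_rng(0)
dims, rank = (64, 64, 3, 3), 8
ranks = [rank] * (len(dims) + 1)                     # uniform TR ranks, ring-closed
cores = [rng.standard_normal((ranks[k], dims[k], ranks[k + 1]))
         for k in range(len(dims))]
full = tr_reconstruct(cores)
assert full.shape == dims
n_full = int(np.prod(dims))
n_tr = sum(c.size for c in cores)
print(f"parameters: {n_full} -> {n_tr} ({n_full / n_tr:.1f}x compression)")
```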
arXiv Detail & Related papers (2024-05-17T14:16:40Z)
- Compression of Structured Data with Autoencoders: Provable Benefit of Nonlinearities and Depth [83.15263499262824]
We prove that gradient descent converges to a solution that completely disregards the sparse structure of the input.
We show how to improve upon Gaussian performance for the compression of sparse data by adding a denoising function to a shallow architecture.
We validate our findings on image datasets, such as CIFAR-10 and MNIST.
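As a toy illustration of pairing a shallow compressor with a denoising function for sparse data, the sketch below encodes a k-sparse vector with a random linear map and applies soft-thresholding at the decoder; the random Gaussian encoder and the soft-threshold denoiser are our assumptions, not the paper's construction.
```python
# Toy sketch: shallow linear compression of a k-sparse signal, with and
# without a soft-thresholding "denoising" step at the decoder.
import numpy as np

def soft_threshold(v, tau):
    # classical sparse denoiser: shrink toward zero by tau
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

rng = np.random.default_rng(1)
n, m, k = 256, 64, 5                            # ambient dim, code dim, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

W = rng.standard_normal((m, n)) / np.sqrt(m)    # shallow random encoder
code = W @ x                                    # compressed representation
x_linear = W.T @ code                           # plain linear decode
x_denoised = soft_threshold(x_linear, tau=0.3)  # add the denoising function

for name, xhat in [("linear", x_linear), ("denoised", x_denoised)]:
    print(name, "MSE:", np.mean((x - xhat) ** 2))
```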
arXiv Detail & Related papers (2024-02-07T16:32:29Z)
- Compression with Bayesian Implicit Neural Representations [16.593537431810237]
We propose overfitting variational neural networks to the data and compressing an approximate posterior weight sample using relative entropy coding instead of quantizing and entropy coding it.
Experiments show that our method achieves strong performance on image and audio compression while retaining simplicity.
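A minimal sketch of the variational overfitting objective follows, assuming a factorized Gaussian posterior over the weights of a tiny implicit network: the distortion term is MSE, and the KL term approximates the bit budget a relative entropy coding step would spend. The relative entropy coder itself is omitted, and all names and architectural choices are illustrative.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesLinear(nn.Module):
    """Linear layer with a factorized Gaussian posterior over its weights."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = nn.Parameter(0.1 * torch.randn(d_out, d_in))
        self.rho = nn.Parameter(torch.full((d_out, d_in), -4.0))  # sigma = softplus(rho)

    def forward(self, x):
        sigma = F.softplus(self.rho)
        w = self.mu + sigma * torch.randn_like(sigma)   # reparameterization trick
        return F.linear(x, w)

    def kl(self, prior_sigma=1.0):
        sigma = F.softplus(self.rho)
        # KL( N(mu, sigma^2) || N(0, prior_sigma^2) ), summed over all weights
        return (torch.log(prior_sigma / sigma)
                + (sigma ** 2 + self.mu ** 2) / (2 * prior_sigma ** 2)
                - 0.5).sum()

class TinyINR(nn.Module):
    """Maps 2-D coordinates to a scalar value, with a sine nonlinearity."""
    def __init__(self):
        super().__init__()
        self.l1, self.l2 = BayesLinear(2, 64), BayesLinear(64, 1)

    def forward(self, xy):
        return self.l2(torch.sin(30.0 * self.l1(xy)))

    def kl(self):
        return self.l1.kl() + self.l2.kl()

coords = torch.rand(1024, 2)                          # toy "image" to overfit
signal = torch.sin(6 * coords[:, :1]) * torch.cos(6 * coords[:, 1:])
inr, lam = TinyINR(), 1e-4
opt = torch.optim.Adam(inr.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    distortion = F.mse_loss(inr(coords), signal)
    rate = inr.kl()             # nats; an REC step would spend roughly this budget
    (distortion + lam * rate).backward()
    opt.step()
```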
arXiv Detail & Related papers (2023-05-30T16:29:52Z)
- Modality-Agnostic Variational Compression of Implicit Neural Representations [96.35492043867104]
We introduce a modality-agnostic neural compression algorithm based on a functional view of data and parameterised as an Implicit Neural Representation (INR).
Bridging the gap between latent coding and sparsity, we obtain compact latent representations non-linearly mapped to a soft gating mechanism.
After obtaining a dataset of such latent representations, we directly optimise the rate/distortion trade-off in a modality-agnostic space using neural compression.
arXiv Detail & Related papers (2023-01-23T15:22:42Z)
- Attention-based Feature Compression for CNN Inference Offloading in Edge Computing [93.67044879636093]
This paper studies the computational offloading of CNN inference in device-edge co-inference systems.
We propose a novel autoencoder-based CNN architecture (AECNN) for effective feature extraction at the end device.
Experiments show that AECNN can compress the intermediate data by more than 256x with only about 4% accuracy loss.
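The sketch below illustrates the general device-edge split idea with a lightweight feature autoencoder at the cut point; the toy CNN, the 1x1-convolution codec, and the split location are our assumptions, not the AECNN design.
```python
# Illustrative device-edge split: compress intermediate CNN features with a
# small autoencoder before transmitting them to the edge server.
import torch
import torch.nn as nn

device_part = nn.Sequential(             # runs on the end device
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
)
feat_encoder = nn.Conv2d(64, 4, 1)       # compress 64 -> 4 channels (16x fewer)
feat_decoder = nn.Conv2d(4, 64, 1)       # runs at the edge, restores channels
edge_part = nn.Sequential(               # remainder of the network at the edge
    nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 10),
)

x = torch.randn(1, 3, 32, 32)
feat = device_part(x)
code = feat_encoder(feat)                # transmit this (optionally quantized)
logits = edge_part(feat_decoder(code))
ratio = feat.numel() / code.numel()
print(f"feature compression: {ratio:.0f}x, logits shape {tuple(logits.shape)}")
```
In practice the feature encoder/decoder pair is trained jointly with the backbone so that task accuracy is preserved after compression.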
arXiv Detail & Related papers (2022-11-24T18:10:01Z)
- Reducing Redundancy in the Bottleneck Representation of the Autoencoders [98.78384185493624]
Autoencoders are a type of unsupervised neural network that can be used to solve various tasks.
We propose a scheme to explicitly penalize feature redundancies in the bottleneck representation.
We tested our approach across different tasks: dimensionality reduction using three different datasets, image compression using the MNIST dataset, and image denoising using Fashion-MNIST.
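One standard way to penalize feature redundancy, shown in the sketch below, is to push the bottleneck's feature correlation matrix toward the identity by penalizing its off-diagonal entries; this is an illustrative choice and may differ from the paper's exact scheme.
```python
# Illustrative redundancy penalty on a bottleneck: penalize off-diagonal
# entries of the batch feature correlation matrix so that latent dimensions
# carry decorrelated information.
import torch

def redundancy_penalty(z, eps=1e-5):
    # z: (batch, bottleneck_dim) activations from the encoder
    z = (z - z.mean(0)) / (z.std(0) + eps)           # standardize each feature
    corr = (z.T @ z) / z.shape[0]                    # (dim, dim) correlation
    off_diag = corr - torch.diag(torch.diag(corr))
    return off_diag.pow(2).sum()

# usage inside a training step (encoder and recon_loss are hypothetical):
#   loss = recon_loss + lam * redundancy_penalty(encoder(x))
```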
arXiv Detail & Related papers (2022-02-09T18:48:02Z)
- Rate Distortion Characteristic Modeling for Neural Image Compression [59.25700168404325]
End-to-end optimization offers neural image compression (NIC) superior lossy compression performance.
However, distinct models must be trained to reach different points in the rate-distortion (R-D) space.
We formulate the essential mathematical functions that describe the R-D behavior of NIC using deep networks and statistical modeling.
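As a small illustration of statistical R-D modeling, the sketch below fits a simple parametric curve D(R) = a*exp(-b*R) + c to measured (rate, distortion) pairs with SciPy; the functional form and the sample points are illustrative assumptions, while the paper develops a more elaborate deep/statistical model.
```python
# Illustrative statistical R-D modeling: fit a low-parameter curve to a few
# measured (rate, distortion) points, then interpolate between them.
import numpy as np
from scipy.optimize import curve_fit

def d_of_r(r, a, b, c):
    return a * np.exp(-b * r) + c

rates = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 4.0])             # bits per pixel
dists = np.array([0.090, 0.052, 0.027, 0.011, 0.004, 0.002])  # e.g. MSE

params, _ = curve_fit(d_of_r, rates, dists, p0=(0.1, 1.0, 0.0))
a, b, c = params
print(f"fitted D(R) = {a:.4f} * exp(-{b:.3f} R) + {c:.5f}")
print("predicted distortion at 1.5 bpp:", d_of_r(1.5, *params))
```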
arXiv Detail & Related papers (2021-06-24T12:23:05Z)
- Universal Rate-Distortion-Perception Representations for Lossy Compression [31.28856752892628]
We consider the notion of universal representations in which one may fix an encoder and vary the decoder to achieve any point within a collection of distortion and perception constraints.
We prove that the corresponding information-theoretic universal rate-distortion-perception function is operationally achievable in an approximate sense.
arXiv Detail & Related papers (2021-06-18T18:52:08Z)
- Orthogonal Features-based EEG Signal Denoising using Fractionally Compressed AutoEncoder [16.889633963766858]
A fractionally compressed autoencoder architecture is introduced for denoising electroencephalogram (EEG) signals.
The proposed architecture provides improved denoising results on the standard datasets when compared with the existing methods.
arXiv Detail & Related papers (2021-02-16T11:15:00Z)
- Deep Networks for Direction-of-Arrival Estimation in Low SNR [89.45026632977456]
We introduce a Convolutional Neural Network (CNN) that is trained from multi-channel data of the true array manifold matrix.
We train a CNN in the low-SNR regime to predict DoAs across all SNRs.
Our robust solution can be applied in several fields, ranging from wireless array sensors to acoustic microphones or sonars.
arXiv Detail & Related papers (2020-11-17T12:52:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.