Output-Constrained Lossy Source Coding With Application to Rate-Distortion-Perception Theory
- URL: http://arxiv.org/abs/2403.14849v1
- Date: Thu, 21 Mar 2024 21:51:36 GMT
- Title: Output-Constrained Lossy Source Coding With Application to Rate-Distortion-Perception Theory
- Authors: Li Xie, Liangyan Li, Jun Chen, Zhongshan Zhang
- Abstract summary: The distortion-rate function of output-constrained lossy source coding with limited common randomness is analyzed.
An explicit expression is obtained when both source and reconstruction distributions are Gaussian.
- Score: 9.464977414419332
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The distortion-rate function of output-constrained lossy source coding with limited common randomness is analyzed for the special case of squared error distortion measure. An explicit expression is obtained when both source and reconstruction distributions are Gaussian. This further leads to a partial characterization of the information-theoretic limit of quadratic Gaussian rate-distortion-perception coding with the perception measure given by Kullback-Leibler divergence or squared quadratic Wasserstein distance.
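For reference, the two perception measures named above have standard closed forms between scalar Gaussians P = N(mu_1, sigma_1^2) and Q = N(mu_2, sigma_2^2); these are textbook identities, not results of the paper:

```latex
D_{\mathrm{KL}}(P \,\|\, Q)
  = \ln\frac{\sigma_2}{\sigma_1}
  + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2\sigma_2^2}
  - \frac{1}{2},
\qquad
W_2^2(P, Q) = (\mu_1 - \mu_2)^2 + (\sigma_1 - \sigma_2)^2
```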
Related papers
- Gaussian Rate-Distortion-Perception Coding and Entropy-Constrained Scalar Quantization [12.575809787716771]
This paper investigates the best known bounds on the quadratic Gaussian distortion-rate-perception function with limited common randomness.
The bounds are nondegenerate in the sense that they cannot be deduced from each other via a refined version of Talagrand's transportation inequality.
An improved lower bound is established when the perception measure is given by the squared Wasserstein-2 distance.
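For context, the two classical endpoints of this trade-off have simple closed forms: the unconstrained quadratic Gaussian distortion-rate function, and the known expression under a perfect-realism constraint with unlimited common randomness. The sketch below compares these standard reference points; it does not reproduce the improved bounds of the paper:

```python
import numpy as np

def gaussian_dr(rate, var=1.0):
    """Classical quadratic Gaussian distortion-rate: D(R) = var * 2^(-2R)."""
    return var * 2.0 ** (-2.0 * rate)

def gaussian_dr_perfect_realism(rate, var=1.0):
    """Known endpoint under a perfect-realism constraint with unlimited
    common randomness: D(R) = 2 * var * (1 - sqrt(1 - 2^(-2R)))."""
    return 2.0 * var * (1.0 - np.sqrt(1.0 - 2.0 ** (-2.0 * rate)))

for r in (0.5, 1.0, 2.0):
    print(f"R={r}: D={gaussian_dr(r):.4f}, "
          f"D_perfect={gaussian_dr_perfect_realism(r):.4f}")
```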
arXiv Detail & Related papers (2024-09-04T02:31:53Z)
- The Rate-Distortion-Perception Trade-off: The Role of Private Randomness [53.81648040452621]
We characterize the corresponding rate-distortion trade-off and show that private randomness is not useful if the compression rate is lower than the entropy of the source.
arXiv Detail & Related papers (2024-04-01T13:36:01Z)
- Rate-Distortion-Perception Tradeoff Based on the Conditional-Distribution Perception Measure [33.084834042565895]
We study the rate-distortion-perception (RDP) tradeoff for a memoryless source model in the limit of large blocklengths.
Our perception measure is based on a divergence between the distributions of the source and reconstruction sequences conditioned on the encoder output.
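A schematic rendering of that constraint in our own notation (M is the encoder output, D an arbitrary divergence, and epsilon the perception budget):

```latex
\mathbb{E}_{M}\!\left[ D\!\left( P_{X^n \mid M} \,\big\|\, P_{\hat{X}^n \mid M} \right) \right] \le \epsilon
```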
arXiv Detail & Related papers (2024-01-22T18:49:56Z)
- On the Computation of the Gaussian Rate-Distortion-Perception Function [10.564071872770146]
We study the computation of the rate-distortion-perception function (RDPF) for a multivariate Gaussian source under mean squared error (MSE) distortion.
We provide the associated algorithmic realization, as well as the convergence and the rate of convergence characterization.
We corroborate our results with numerical simulations and draw connections to existing results.
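The paper's algorithm is not reproduced here. As a rough illustration of what an RDPF computation involves, the sketch below brute-forces the scalar Gaussian case over jointly Gaussian test channels Xhat = a*X + Z, under MSE distortion and a squared Wasserstein-2 perception budget, using the closed forms for rate, MSE, and W2 between Gaussians; restricting to Gaussian test channels is an assumption made only to keep the sketch small:

```python
import numpy as np

def gaussian_rdpf_grid(var=1.0, D=0.3, P=0.05, n=400):
    """Brute-force RDPF estimate for a scalar Gaussian source, restricted to
    test channels Xhat = a*X + Z with Z ~ N(0, s2): minimize I(X; Xhat)
    subject to MSE <= D and W2^2(law(X), law(Xhat)) <= P."""
    sigma, best = np.sqrt(var), np.inf
    for a in np.linspace(0.0, 1.5, n):
        for s2 in np.linspace(1e-6, var, n):
            mse = (1.0 - a) ** 2 * var + s2                  # E[(X - Xhat)^2]
            w2sq = (sigma - np.sqrt(a * a * var + s2)) ** 2  # W2^2 of the marginals
            if mse <= D and w2sq <= P:
                rate = 0.5 * np.log2(1.0 + a * a * var / s2) # I(X; Xhat) in bits
                best = min(best, rate)
    return best

print(gaussian_rdpf_grid())  # bits per sample
```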
arXiv Detail & Related papers (2023-11-15T18:34:03Z)
- Regularized Vector Quantization for Tokenized Image Synthesis [126.96880843754066]
Quantizing images into discrete representations has been a fundamental problem in unified generative modeling.
Deterministic quantization suffers from severe codebook collapse and misalignment with the inference stage, while stochastic quantization suffers from low codebook utilization and a perturbed reconstruction objective.
This paper presents a regularized vector quantization framework that effectively mitigates the above issues by applying regularization from two perspectives.
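As a generic illustration of one such regularization idea (not necessarily either of the paper's two perspectives), the sketch below penalizes low entropy of the empirical codeword-usage distribution, a common remedy for codebook collapse:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(256, 8))        # batch of 8-dim latent vectors
codebook = rng.normal(size=(16, 8))  # 16 learnable code vectors

# Nearest-codeword assignment (deterministic quantization).
d2 = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (256, 16)
idx = d2.argmin(1)

# Usage-entropy regularizer: high entropy of the empirical codeword
# distribution discourages collapse onto a few codewords.
usage = np.bincount(idx, minlength=len(codebook)) / len(x)
usage_entropy = -(usage * np.log(usage + 1e-12)).sum()

commitment = d2[np.arange(len(x)), idx].mean()  # quantization error term
loss = commitment - 0.1 * usage_entropy         # reward spread-out usage
print(f"commitment={commitment:.3f}, entropy={usage_entropy:.3f}, loss={loss:.3f}")
```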
arXiv Detail & Related papers (2023-03-11T15:20:54Z)
- Lossy Quantum Source Coding with a Global Error Criterion based on a Posterior Reference Map [7.646713951724011]
We consider the lossy quantum source coding problem where the task is to compress a given quantum source below its von Neumann entropy.
Inspired by the duality connections between the rate-distortion and channel coding problems in the classical setting, we propose a new formulation for the problem.
arXiv Detail & Related papers (2023-02-01T17:44:40Z)
- Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of the fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
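A minimal sketch of the underlying Laplace idea on a toy 1-D posterior: climb to the mode by gradient ascent, then turn the local curvature into a Gaussian variance. This shows why such posteriors can be richer than a fixed amortized factorized Gaussian; it is not the VLAE training procedure itself:

```python
def laplace_approx(log_post_grad, log_post_hess, z0=0.0, steps=100, lr=0.1):
    """Generic 1-D Laplace approximation: gradient ascent to the posterior
    mode, then variance = -1 / (second derivative at the mode)."""
    z = z0
    for _ in range(steps):
        z += lr * log_post_grad(z)   # ascend to the mode
    return z, -1.0 / log_post_hess(z)

# Toy unnormalized log-posterior: log p(z|x) = -0.5 * (z - 2)^2 / 0.3 + const
grad = lambda z: -(z - 2.0) / 0.3
hess = lambda z: -1.0 / 0.3
mu, var = laplace_approx(grad, hess)
print(f"mode={mu:.3f}, variance={var:.3f}")  # approx (2.0, 0.3)
```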
arXiv Detail & Related papers (2022-11-30T18:59:27Z)
- Generalization Bounds via Convex Analysis [12.411844611718958]
We show that it is possible to replace the mutual information by any strongly convex function of the joint input-output distribution.
Examples include bounds stated in terms of $p$-norm divergences and the Wasserstein-2 distance.
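The baseline being generalized is the standard mutual-information bound (Xu-Raginsky style): for a sigma-sub-Gaussian loss and n i.i.d. training samples S, the expected generalization gap of the learned hypothesis W satisfies

```latex
\left| \mathbb{E}\!\left[ \operatorname{gen}(W, S) \right] \right|
  \le \sqrt{ \frac{2\sigma^2}{n}\, I(W; S) }
```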
arXiv Detail & Related papers (2022-02-10T12:30:45Z)
- Robust Estimation for Nonparametric Families via Generative Adversarial Networks [92.64483100338724]
We provide a framework for designing Generative Adversarial Networks (GANs) to solve high dimensional robust statistics problems.
Our work extends these to robust mean estimation, second moment estimation, and robust linear regression.
In terms of techniques, our proposed GAN losses can be viewed as a smoothed and generalized Kolmogorov-Smirnov distance.
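As a rough toy rendering of the smoothed Kolmogorov-Smirnov idea (our own sketch, not the paper's exact GAN loss), one can maximize over random halfspaces the gap in expected sigmoid responses between two samples:

```python
import numpy as np

def smoothed_ks(x, y, n_dirs=500, tau=0.1, seed=0):
    """Crude smoothed-KS-style discrepancy: max over random halfspaces
    (w, b) of the gap in mean sigmoid responses of the two samples."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(n_dirs):
        w = rng.normal(size=x.shape[1])
        w /= np.linalg.norm(w)
        b = rng.normal()
        fx = 1.0 / (1.0 + np.exp(-(x @ w - b) / tau))
        fy = 1.0 / (1.0 + np.exp(-(y @ w - b) / tau))
        best = max(best, abs(fx.mean() - fy.mean()))
    return best

rng = np.random.default_rng(1)
clean = rng.normal(size=(1000, 5))
corrupted = np.vstack([clean[:900], rng.normal(5.0, 1.0, size=(100, 5))])
print(smoothed_ks(clean, corrupted))  # larger gap under contamination
```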
arXiv Detail & Related papers (2022-02-02T20:11:33Z)
- An Indirect Rate-Distortion Characterization for Semantic Sources: General Model and the Case of Gaussian Observation [83.93224401261068]
The source model is motivated by the recent surge of interest in the semantic aspect of information.
The intrinsic state corresponds to the semantic feature of the source, which in general is not observable.
The resulting rate-distortion function is the semantic rate-distortion function of the source.
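The classical reduction behind indirect (remote) formulations of this kind, stated in our notation: when the semantic state S is seen only through the observation X, the indirect problem collapses to a direct one under a modified distortion measure,

```latex
\hat{d}(x, \hat{s}) = \mathbb{E}\!\left[ d(S, \hat{s}) \mid X = x \right],
\qquad
R_{\mathrm{semantic}}(D)
  = \min_{P_{\hat{S} \mid X} \,:\, \mathbb{E}[\hat{d}(X, \hat{S})] \le D} I(X; \hat{S})
```

The display above is the textbook version of the reduction; the paper's general model goes beyond it.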
arXiv Detail & Related papers (2022-01-29T02:14:24Z)
- The Accuracy vs. Sampling Overhead Trade-off in Quantum Error Mitigation Using Monte Carlo-Based Channel Inversion [84.66087478797475]
Quantum error mitigation (QEM) is a class of promising techniques for reducing the computational error of variational quantum algorithms.
We consider a practical channel inversion strategy based on Monte Carlo sampling, which introduces additional computational error.
We show that when the computational error is small compared to the dynamic range of the error-free results, it scales with the square root of the number of gates.
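A pedagogical classical analogue (a toy bit-flip model, not the paper's quantum setting) makes the sampling overhead of Monte Carlo channel inversion concrete: the channel inverse is a quasi-probability mixture with negative weights, and sampling it multiplies the estimator's spread by gamma = 1/(1 - 2p):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.1                        # bit-flip probability of the noisy channel
gamma = 1.0 / (1.0 - 2.0 * p)  # 1-norm of the quasi-probability: the overhead

def noisy_channel(bit):
    return bit ^ int(rng.random() < p)

def mitigated_sample(bit):
    """One Monte Carlo sample of the inverse map q_I*I + q_X*X with
    q_I = (1-p)/(1-2p) and q_X = -p/(1-2p): pick an operation with
    probability |q|/gamma and carry sign(q)*gamma as a weight."""
    q_i = (1.0 - p) / (1.0 - 2.0 * p)
    if rng.random() < q_i / gamma:
        return bit, +gamma       # identity branch, positive weight
    return bit ^ 1, -gamma       # extra-flip branch, negative weight

est = []
for _ in range(200_000):
    out, w = mitigated_sample(noisy_channel(1))
    est.append(w * out)          # weighted estimate of the ideal output
print(f"mitigated mean: {np.mean(est):.3f} (ideal 1.0), gamma={gamma:.3f}")
```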
arXiv Detail & Related papers (2022-01-20T00:05:01Z)