The Rate-Distortion-Perception Tradeoff: The Role of Common Randomness
- URL: http://arxiv.org/abs/2202.04147v1
- Date: Tue, 8 Feb 2022 21:14:57 GMT
- Title: The Rate-Distortion-Perception Tradeoff: The Role of Common Randomness
- Authors: Aaron B. Wagner
- Abstract summary: This paper focuses on the case of perfect realism, which coincides with the problem of distribution-preserving lossy compression.
The existing tradeoff is recovered by allowing for the amount of common randomness to be infinite.
- Score: 23.37690979017006
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A rate-distortion-perception (RDP) tradeoff has recently been proposed by
Blau and Michaeli and also Matsumoto. Focusing on the case of perfect realism,
which coincides with the problem of distribution-preserving lossy compression
studied by Li et al., a coding theorem for the RDP tradeoff that allows for a
specified amount of common randomness between the encoder and decoder is
provided. The existing RDP tradeoff is recovered by allowing for the amount of
common randomness to be infinite. The quadratic Gaussian case is examined in
detail.
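The quadratic Gaussian case admits closed-form curves that make the role of the perfect-realism constraint concrete. As a hedged sketch (the formulas below are the commonly cited expressions for a variance-sigma^2 Gaussian source under mean-squared error with unlimited common randomness, not reproduced verbatim from the paper): the classical distortion-rate function is D(R) = sigma^2 * 2^(-2R), while requiring the reconstruction to have the same N(0, sigma^2) distribution gives D(R) = 2*sigma^2*(1 - sqrt(1 - 2^(-2R))).

```python
import math

def distortion_rate_classical(rate_bits: float, var: float = 1.0) -> float:
    """Classical Gaussian distortion-rate function: D(R) = sigma^2 * 2^(-2R)."""
    return var * 2.0 ** (-2.0 * rate_bits)

def distortion_rate_perfect_realism(rate_bits: float, var: float = 1.0) -> float:
    """Distortion-rate under a perfect-realism constraint (reconstruction
    distributed exactly as the source) with unlimited common randomness:
    D(R) = 2*sigma^2 * (1 - sqrt(1 - 2^(-2R)))."""
    d = 2.0 ** (-2.0 * rate_bits)
    return 2.0 * var * (1.0 - math.sqrt(1.0 - d))

for r in [0.25, 0.5, 1.0, 2.0, 4.0]:
    dc = distortion_rate_classical(r)
    dp = distortion_rate_perfect_realism(r)
    print(f"R={r:4.2f}  classical D={dc:.4f}  perfect-realism D={dp:.4f}  ratio={dp/dc:.3f}")
```

The comparison shows the familiar pattern: at low rates the perfect-realism penalty approaches a factor of two in distortion, while at high rates the two curves coincide.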
Related papers
- The Rate-Distortion-Perception Trade-off: The Role of Private Randomness [53.81648040452621]
We characterize the corresponding rate-distortion trade-off and show that private randomness is not useful if the compression rate is lower than the entropy of the source.
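The entropy threshold in this claim is easy to evaluate numerically. A minimal sketch (the distribution below is illustrative, not taken from the paper): compute the Shannon entropy H(X) of a discrete source; the statement is that for compression rates below H(X), private randomness at the encoder brings no benefit.

```python
import math

def shannon_entropy(pmf: list[float]) -> float:
    """Shannon entropy in bits of a discrete probability mass function."""
    return -sum(p * math.log2(p) for p in pmf if p > 0)

# Illustrative source: a biased 4-symbol alphabet with dyadic probabilities.
pmf = [0.5, 0.25, 0.125, 0.125]
h = shannon_entropy(pmf)
print(f"H(X) = {h} bits")  # prints H(X) = 1.75 bits
```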
arXiv Detail & Related papers (2024-04-01T13:36:01Z) - Output-Constrained Lossy Source Coding With Application to Rate-Distortion-Perception Theory [9.464977414419332]
The distortion-rate function of output-constrained lossy source coding with limited common randomness is analyzed.
An explicit expression is obtained when both source and reconstruction distributions are Gaussian.
arXiv Detail & Related papers (2024-03-21T21:51:36Z) - Rate-Distortion-Perception Tradeoff Based on the Conditional-Distribution Perception Measure [33.084834042565895]
We study the rate-distortion-perception (RDP) tradeoff for a memoryless source model in the limit of large blocklengths.
Our perception measure is based on a divergence between the distributions of the source and reconstruction sequences conditioned on the encoder output.
arXiv Detail & Related papers (2024-01-22T18:49:56Z) - The Curious Price of Distributional Robustness in Reinforcement Learning with a Generative Model [61.87673435273466]
This paper investigates model robustness in reinforcement learning (RL) to reduce the sim-to-real gap in practice.
We adopt the framework of distributionally robust Markov decision processes (RMDPs), aimed at learning a policy that optimizes the worst-case performance when the deployed environment falls within a prescribed uncertainty set around the nominal MDP.
arXiv Detail & Related papers (2023-05-26T02:32:03Z) - Policy Evaluation in Distributional LQR [70.63903506291383]
We provide a closed-form expression of the distribution of the random return.
We show that this distribution can be approximated by a finite number of random variables.
Using the approximate return distribution, we propose a zeroth-order policy gradient algorithm for risk-averse LQR.
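A zeroth-order policy gradient can be sketched generically. Assuming the standard two-point estimator (the objective below is a stand-in quadratic, not the paper's risk-averse LQR objective): perturb the parameters along random sign directions and estimate the gradient from objective evaluations alone, with no access to derivatives.

```python
import random

def zeroth_order_gradient(objective, theta, delta=1e-4, num_samples=1000, seed=0):
    """Two-point zeroth-order gradient estimate:
    g ~ E_u[ (J(theta + delta*u) - J(theta - delta*u)) / (2*delta) * u ],
    with u a coordinate-wise Rademacher (random sign) vector."""
    rng = random.Random(seed)
    dim = len(theta)
    grad = [0.0] * dim
    for _ in range(num_samples):
        u = [rng.choice([-1.0, 1.0]) for _ in range(dim)]
        plus = objective([t + delta * ui for t, ui in zip(theta, u)])
        minus = objective([t - delta * ui for t, ui in zip(theta, u)])
        scale = (plus - minus) / (2.0 * delta)
        for i in range(dim):
            grad[i] += scale * u[i] / num_samples
    return grad

# Sanity check on J(theta) = sum(theta_i^2), whose true gradient is 2*theta.
theta = [1.0, -2.0, 0.5]
g = zeroth_order_gradient(lambda th: sum(t * t for t in th), theta)
print(g)  # close to [2.0, -4.0, 1.0]
```

Because E[u_i * u_j] vanishes for i != j, the estimator is unbiased up to O(delta^2) smoothing error; averaging over many directions controls its variance.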
arXiv Detail & Related papers (2023-03-23T20:27:40Z) - Contextual bandits with concave rewards, and an application to fair ranking [108.48223948875685]
We present the first algorithm with provably vanishing regret for Contextual Bandits with Concave Rewards (CBCR)
We derive a novel reduction from the CBCR regret to the regret of a scalar-reward problem.
Motivated by fairness in recommendation, we describe a special case of CBCR with rankings and fairness-aware objectives.
arXiv Detail & Related papers (2022-10-18T16:11:55Z) - Robust Estimation for Nonparametric Families via Generative Adversarial Networks [92.64483100338724]
We provide a framework for designing Generative Adversarial Networks (GANs) to solve high dimensional robust statistics problems.
Our work extends these methods to robust mean estimation, second moment estimation, and robust linear regression.
In terms of techniques, our proposed GAN losses can be viewed as a smoothed and generalized Kolmogorov-Smirnov distance.
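The smoothed Kolmogorov-Smirnov view can be made concrete. As a hedged sketch (the sigmoid smoothing below is one standard construction, not the paper's exact loss): the classical two-sample KS distance is the supremum over thresholds of the gap between empirical CDFs; replacing the step indicator 1{v <= t} with a sigmoid of bandwidth h yields a differentiable surrogate.

```python
import math

def ks_distance(xs, ys):
    """Classical two-sample Kolmogorov-Smirnov distance:
    sup over thresholds t of |F_xs(t) - F_ys(t)|."""
    thresholds = sorted(set(xs) | set(ys))
    def ecdf(sample, t):
        return sum(1 for v in sample if v <= t) / len(sample)
    return max(abs(ecdf(xs, t) - ecdf(ys, t)) for t in thresholds)

def smoothed_ks_distance(xs, ys, h=0.1):
    """Smoothed variant: the indicator 1{v <= t} is replaced by the sigmoid
    1 / (1 + exp(-(t - v)/h)), making the statistic differentiable in the samples."""
    thresholds = sorted(set(xs) | set(ys))
    def soft_ecdf(sample, t):
        return sum(1.0 / (1.0 + math.exp(-(t - v) / h)) for v in sample) / len(sample)
    return max(abs(soft_ecdf(xs, t) - soft_ecdf(ys, t)) for t in thresholds)

a = [0.0, 0.1, 0.2, 0.3]
b = [1.0, 1.1, 1.2, 1.3]
print(ks_distance(a, b))           # 1.0: the two samples are fully separated
print(smoothed_ks_distance(a, a))  # 0.0: identical samples
```

In a GAN setting the smoothed statistic plays the role of the discriminator objective, since gradients can flow through the sigmoid where the hard indicator is flat almost everywhere.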
arXiv Detail & Related papers (2022-02-02T20:11:33Z) - Universal Rate-Distortion-Perception Representations for Lossy Compression [31.28856752892628]
We consider the notion of universal representations in which one may fix an encoder and vary the decoder to achieve any point within a collection of distortion and perception constraints.
We prove that the corresponding information-theoretic universal rate-distortion-perception function is operationally achievable in an approximate sense.
arXiv Detail & Related papers (2021-06-18T18:52:08Z) - Implicit Distributional Reinforcement Learning [61.166030238490634]
We propose an implicit distributional actor-critic (IDAC) built on two deep generator networks (DGNs) and a semi-implicit actor (SIA) powered by a flexible policy distribution.
We observe IDAC outperforms state-of-the-art algorithms on representative OpenAI Gym environments.
arXiv Detail & Related papers (2020-07-13T02:52:18Z)