Chain Rules for Rényi Information Combining
- URL: http://arxiv.org/abs/2305.02589v1
- Date: Thu, 4 May 2023 06:47:50 GMT
- Title: Chain Rules for Rényi Information Combining
- Authors: Christoph Hirche, Xinyue Guan, Marco Tomamichel
- Abstract summary: Bounds on information combining are a fundamental tool in coding theory.
This work provides new information combining bounds for the Arimoto Rényi entropy.
In the second part, we generalize the chain rules to the quantum setting and show how they allow us to extend results and conjectures previously given only for the von Neumann entropy.
- Score: 14.824891788575421
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bounds on information combining are a fundamental tool in coding theory, in
particular when analyzing polar codes and belief propagation. They usually
bound the evolution of random variables with respect to their Shannon entropy.
In recent work this approach was generalized to Rényi $\alpha$-entropies.
However, due to the lack of a traditional chain rule for Rényi entropies the
picture remained incomplete. In this work we establish the missing link by
providing Rényi chain rules connecting different definitions of Rényi entropies
by Hayashi and Arimoto. This allows us to provide new information combining
bounds for the Arimoto Rényi entropy. In the second part, we generalize the
chain rules to the quantum setting and show how they allow us to generalize
results and conjectures previously only given for the von Neumann entropy. In
the special case of $\alpha=2$ we give the first optimal information combining
bounds with quantum side information.
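The abstract's central obstacle, that Rényi entropies lack the traditional chain rule, can be checked numerically. The sketch below is an illustration only (the joint distribution is an arbitrary made-up example, not data from the paper) and uses the standard definitions: $H_\alpha(X) = \frac{1}{1-\alpha}\log_2 \sum_x p(x)^\alpha$ and Arimoto's conditional entropy $H_\alpha(X|Y) = \frac{\alpha}{1-\alpha}\log_2 \sum_y \big(\sum_x p(x,y)^\alpha\big)^{1/\alpha}$. In the Shannon limit the identity $H(XY) = H(Y) + H(X|Y)$ holds exactly, while at $\alpha = 2$ the naive analogue fails:

```python
import math

# Arbitrary example joint distribution p(x, y), chosen only for illustration.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}
p_y = {0: 0.6, 1: 0.4}  # marginal of Y

def renyi(probs, alpha):
    """Rényi entropy H_alpha = log2(sum_i p_i^alpha) / (1 - alpha), alpha != 1."""
    return math.log2(sum(p ** alpha for p in probs)) / (1 - alpha)

def arimoto_conditional(joint, alpha):
    """Arimoto's conditional Rényi entropy H_alpha(X|Y)."""
    inner = sum(
        sum(p ** alpha for (x, y), p in joint.items() if y == y0) ** (1 / alpha)
        for y0 in {y for (_, y) in joint}
    )
    return (alpha / (1 - alpha)) * math.log2(inner)

def shannon(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Shannon (alpha -> 1): the chain rule H(XY) = H(Y) + H(X|Y) is exact.
h_xy = shannon(joint.values())
h_x_given_y = sum(
    p_y[y0] * shannon([joint[(x, y0)] / p_y[y0] for x in (0, 1)]) for y0 in (0, 1)
)
shannon_gap = h_xy - (shannon(p_y.values()) + h_x_given_y)

# Rényi at alpha = 2: the same identity fails for this distribution.
alpha = 2
gap = renyi(joint.values(), alpha) - (
    renyi(p_y.values(), alpha) + arimoto_conditional(joint, alpha)
)

print(f"Shannon chain-rule gap: {shannon_gap:.6f}")  # ~0 (exact up to rounding)
print(f"Renyi (alpha=2) gap:    {gap:.4f}")          # nonzero
```

The nonzero gap at $\alpha = 2$ is exactly the incompleteness the abstract refers to; the paper's contribution is chain rules that hold by connecting the Hayashi and Arimoto definitions rather than reusing one definition on both sides.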
Related papers
- Causal Layering via Conditional Entropy [85.01590667411956]
Causal discovery aims to recover information about an unobserved causal graph from the observable data it generates.
We provide ways to recover layerings of a graph by accessing the data via a conditional entropy oracle.
arXiv Detail & Related papers (2024-01-19T05:18:28Z) - Geometric relative entropies and barycentric Rényi divergences [16.385815610837167]
We show that monotone quantum relative entropies define monotone Rényi quantities whenever $P$ is a probability measure.
arXiv Detail & Related papers (2022-07-28T17:58:59Z) - Cone-Restricted Information Theory [5.148939336441987]
We show which results in quantum information theory rely upon the positive semidefinite cone and which can be generalized.
We present parallel results for the extended conditional min-entropy.
In doing so, we extend the notion of k-superpositive channels to superchannels.
arXiv Detail & Related papers (2022-06-09T06:27:48Z) - Tight Exponential Analysis for Smoothing the Max-Relative Entropy and
for Quantum Privacy Amplification [56.61325554836984]
The max-relative entropy together with its smoothed version is a basic tool in quantum information theory.
We derive the exact exponent for the decay of the small modification of the quantum state in smoothing the max-relative entropy based on purified distance.
arXiv Detail & Related papers (2021-11-01T16:35:41Z) - Rényi divergence inequalities via interpolation, with applications to
generalised entropic uncertainty relations [91.3755431537592]
We investigate quantum Rényi entropic quantities, specifically those derived from the 'sandwiched' divergence.
We present Rényi mutual information decomposition rules, a new approach to the Rényi conditional entropy tripartite chain rules, and a more general bipartite comparison.
arXiv Detail & Related papers (2021-06-19T04:06:23Z) - From Classical to Quantum: Uniform Continuity Bounds on Entropies in
Infinite Dimensions [6.553031877558699]
We prove uniform continuity bounds for entropies of classical random variables on an infinite state space and of quantum states of infinite-dimensional systems.
The proof relies on a new mean-constrained Fano-type inequality and the notion of maximal coupling of random variables.
arXiv Detail & Related papers (2021-04-05T17:18:42Z) - Shannon Entropy Rate of Hidden Markov Processes [77.34726150561087]
We show how to calculate entropy rates for hidden Markov chains.
We also show how this method gives the minimal set of infinite predictive features.
A sequel addresses the second part of the challenge, concerning structure.
arXiv Detail & Related papers (2020-08-29T00:48:17Z) - Relevant OTOC operators: footprints of the classical dynamics [68.8204255655161]
The OTOC-RE theorem relates the OTOCs summed over a complete base of operators to the second Rényi entropy.
We show that the sum over a small set of relevant operators is enough to obtain a very good approximation for the entropy.
In turn, this provides an alternative natural indicator of complexity: the scaling of the number of relevant operators with time.
arXiv Detail & Related papers (2020-07-31T19:23:26Z) - A tight uniform continuity bound for the Arimoto-Rényi conditional
entropy and its extension to classical-quantum states [7.741539072749043]
We prove a tight uniform continuity bound for Arimoto's version of the conditional $\alpha$-Rényi entropy, for the range $\alpha \in [0, 1)$.
We apply our result to obtain a tight uniform continuity bound for the conditional $\alpha$-Rényi entropy of a classical-quantum state, for $\alpha$ in the same range as above.
arXiv Detail & Related papers (2020-07-09T20:20:15Z) - Debiased Sinkhorn barycenters [110.79706180350507]
Entropy regularization in optimal transport (OT) has been the driver of many recent interests for Wasserstein metrics and barycenters in machine learning.
We show how this bias is tightly linked to the reference measure that defines the entropy regularizer.
We propose debiased Wasserstein barycenters that preserve the best of both worlds: fast Sinkhorn-like iterations without entropy smoothing.
arXiv Detail & Related papers (2020-06-03T23:06:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.