Universal chain rules from entropic triangle inequalities
- URL: http://arxiv.org/abs/2412.06723v1
- Date: Mon, 09 Dec 2024 18:10:28 GMT
- Title: Universal chain rules from entropic triangle inequalities
- Authors: Ashutosh Marwah, Frédéric Dupuis
- Abstract summary: We lower bound the smooth min-entropy of an $n$-partite system in terms of, roughly speaking, equally strong entropies of the individual subsystems.
We also prove an approximate version of the entropy accumulation theorem, which relaxes the conditions required on the state to bound its smooth min-entropy.
- Score: 1.8416014644193066
- Abstract: The von Neumann entropy of an $n$-partite system $A_1^n$ given a system $B$ can be written as the sum of the von Neumann entropies of the individual subsystems $A_k$ given $A_1^{k-1}$ and $B$. While it is known that such a chain rule does not hold for the smooth min-entropy, we prove a counterpart of this for a variant of the smooth min-entropy, which is equal to the conventional smooth min-entropy up to a constant. This enables us to lower bound the smooth min-entropy of an $n$-partite system in terms of, roughly speaking, equally strong entropies of the individual subsystems. We call this a universal chain rule for the smooth min-entropy, since it is applicable for all values of $n$. Using duality, we also derive a similar relation for the smooth max-entropy. Our proof utilises the entropic triangle inequalities for analysing approximation chains. Additionally, we prove an approximate version of the entropy accumulation theorem, which significantly relaxes the conditions required on the state to bound its smooth min-entropy. In particular, it does not require the state to be produced through a sequential process like previous entropy-accumulation-type bounds. In our upcoming companion paper, we use it to prove the security of parallel device-independent quantum key distribution.
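For orientation, the exact von Neumann chain rule the abstract starts from, and the standard duality between smooth min- and max-entropy (for pure tripartite states) that transfers min-entropy statements to the max-entropy, are the following standard facts:

```latex
% Chain rule for the von Neumann entropy (exact):
H(A_1^n \mid B) \;=\; \sum_{k=1}^{n} H(A_k \mid A_1^{k-1} B)

% Duality of smooth entropies: for any pure state \rho_{ABC},
H_{\min}^{\varepsilon}(A \mid B)_{\rho} \;=\; -\, H_{\max}^{\varepsilon}(A \mid C)_{\rho}
```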
Related papers
- Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
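As a minimal sketch of the object under study, here is a two-layer ReLU network together with its path norm, one standard example of a bounded norm for over-parameterized two-layer networks (an illustrative choice, not necessarily the norm analysed in the paper):

```python
import numpy as np

def two_layer(x, W, a):
    """Two-layer ReLU network f(x) = sum_j a_j * relu(<w_j, x>)."""
    return a @ np.maximum(W @ x, 0.0)

def path_norm(W, a):
    """Path norm sum_j |a_j| * ||w_j||_2, a common capacity measure for
    norm-constrained two-layer networks (illustrative choice)."""
    return np.sum(np.abs(a) * np.linalg.norm(W, axis=1))

# Example: d = 5 inputs, m = 100 hidden units (over-parameterized regime).
rng = np.random.default_rng(0)
W = rng.standard_normal((100, 5))
a = rng.standard_normal(100) / 100
print(two_layer(rng.standard_normal(5), W, a), path_norm(W, a))
```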
arXiv Detail & Related papers (2024-04-29T15:04:07Z)
- Efficient unitary designs and pseudorandom unitaries from permutations [35.66857288673615]
We show that products of exponentiated sums of $S(N)$ permutations with random phases match the first $2^{\Omega(n)}$ moments of the Haar measure.
The heart of our proof is a conceptual connection between the large dimension (large-$N$) expansion in random matrix theory and the method.
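A toy numerical rendering of the kind of ensemble described, products of exponentials of phased sums of random permutations; the ensemble, phase distribution, and normalisation in the paper differ, so treat this only as an illustration:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def random_permutation_matrix(N):
    """Matrix of a uniformly random element of the symmetric group S(N)."""
    P = np.zeros((N, N))
    P[np.arange(N), rng.permutation(N)] = 1.0
    return P

def phased_permutation_hamiltonian(N, k):
    """Hermitian part of a sum of k random permutations with random phases,
    so that exp(iH) is unitary (a simplifying choice made here)."""
    A = sum(np.exp(2j * np.pi * rng.random()) * random_permutation_matrix(N)
            for _ in range(k))
    return (A + A.conj().T) / 2

def product_of_exponentials(N, depth, k):
    U = np.eye(N, dtype=complex)
    for _ in range(depth):
        U = U @ expm(1j * phased_permutation_hamiltonian(N, k))
    return U

U = product_of_exponentials(N=16, depth=4, k=3)
print(np.allclose(U.conj().T @ U, np.eye(16)))  # unitarity check: True
```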
arXiv Detail & Related papers (2024-04-25T17:08:34Z)
- Smooth min-entropy lower bounds for approximation chains [0.0]
We prove a simple entropic triangle inequality, which allows us to bound the smooth min-entropy of a state in terms of the Rényi entropy of an arbitrary auxiliary state.
Using this triangle inequality, we derive lower bounds on the smooth min-entropy of a state in terms of the entropies of its approximation chain in various scenarios.
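Schematically, such an entropic triangle inequality bounds the smooth min-entropy of $\rho$ through the Rényi entropy of any nearby auxiliary state $\sigma$; the exact smoothing parameters and constants are as in the paper, so the following only conveys the shape of the bound:

```latex
P(\rho_{AB}, \sigma_{AB}) \le \varepsilon' \quad\Longrightarrow\quad
H_{\min}^{\varepsilon}(A \mid B)_{\rho} \;\gtrsim\;
\widetilde{H}_{\alpha}(A \mid B)_{\sigma}
\;-\; O\!\left(\frac{\alpha}{\alpha-1}\,\log\frac{1}{\varepsilon'}\right),
\qquad \alpha \in (1, 2]
```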
arXiv Detail & Related papers (2023-08-22T18:55:16Z)
- Tip of the Quantum Entropy Cone [1.1606619391009658]
Relations among von Neumann entropies of different parts of an $N$-partite quantum system have direct impact on our understanding of diverse situations.
We show that while it is always possible to up-scale an entropy vector to arbitrary integer multiples, it is not always possible to down-scale it to arbitrarily small size.
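The up-scaling direction follows from additivity of the von Neumann entropy under tensor products (presumably the standard argument here): handing each party $\lambda$ independent copies of its subsystem scales every entry of the entropy vector by $\lambda$,

```latex
S\big((\rho^{\otimes \lambda})_I\big) \;=\; \lambda\, S(\rho_I)
\qquad \text{for every subset } I \subseteq \{1, \dots, N\},
```

so if $v$ is realisable then so is $\lambda v$; the paper's point is that the reverse direction can fail.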
arXiv Detail & Related papers (2023-05-31T21:37:24Z)
- Fast Rates for Maximum Entropy Exploration [52.946307632704645]
We address the challenge of exploration in reinforcement learning (RL) when the agent operates in an unknown environment with sparse or no rewards.
We study two different types of the maximum entropy exploration problem.
For visitation entropy, we propose a game-theoretic algorithm that has $\widetilde{\mathcal{O}}(H^3S^2A/\varepsilon^2)$ sample complexity.
For the trajectory entropy, we propose a simple algorithm that has a sample complexity of order $\widetilde{\mathcal{O}}(\mathrm{poly}(S,\dots))$.
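For orientation, the visitation-entropy objective being maximized is just the Shannon entropy of the state-visitation distribution; a minimal numpy sketch (the paper's contribution is the algorithms and their rates, not this computation):

```python
import numpy as np

def visitation_entropy(d):
    """Shannon entropy H(d) = -sum_s d(s) log d(s) of a state-visitation
    distribution d, the quantity maximized in visitation-entropy exploration."""
    d = np.asarray(d, dtype=float)
    d = d / d.sum()                  # normalize defensively
    nz = d[d > 0]                    # 0 log 0 = 0 convention
    return float(-np.sum(nz * np.log(nz)))

print(visitation_entropy([0.25, 0.25, 0.25, 0.25]))  # log(4) ~ 1.386
```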
arXiv Detail & Related papers (2023-03-14T16:51:14Z)
- Average-Case Complexity of Tensor Decomposition for Low-Degree Polynomials [93.59919600451487]
"Statistical-computational gaps" occur in many statistical inference tasks.
We consider a model for random order-3 tensor decomposition where one component is slightly larger in norm than the rest.
We show that low-degree polynomials in the tensor entries can accurately estimate the largest component when $r \ll n^{3/2}$ but fail to do so when $r \gg n^{3/2}$.
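A minimal numpy sketch of one reading of this planted model, a rank-$r$ order-3 tensor whose first component is slightly larger in norm; the parameter names and normalisation are illustrative, not the paper's exact model:

```python
import numpy as np

def planted_order3(n, r, boost=1.1, seed=0):
    """T = boost * a_1^(x3) + sum_{i>=2} a_i^(x3): a random order-3 tensor
    with one slightly larger component (illustrative model)."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((r, n)) / np.sqrt(n)  # components of comparable norm
    T = boost * np.einsum('i,j,k->ijk', A[0], A[0], A[0])
    for a in A[1:]:
        T += np.einsum('i,j,k->ijk', a, a, a)
    return T, A

T, A = planted_order3(n=20, r=5)
print(T.shape)  # (20, 20, 20)
```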
arXiv Detail & Related papers (2022-11-10T00:40:37Z)
- A new entanglement measure based dual entropy [7.95085289592294]
We define $St$-entropy entanglement based on von Neumann entropy and its complementary dual.
We prove a new type of entanglement inequality in terms of $St$-entropy entanglement for quantum entangled networks.
arXiv Detail & Related papers (2022-04-15T10:08:12Z)
- Sublinear quantum algorithms for estimating von Neumann entropy [18.30551855632791]
We study the problem of obtaining estimates to within a multiplicative factor $\gamma>1$ of the Shannon entropy of probability distributions and the von Neumann entropy of mixed quantum states.
We work with the quantum purified query access model, which can handle both classical probability distributions and mixed quantum states, and is the most general input model considered in the literature.
arXiv Detail & Related papers (2021-11-22T12:00:45Z)
- From Classical to Quantum: Uniform Continuity Bounds on Entropies in Infinite Dimensions [12.958449178903727]
We prove uniform continuity bounds for entropies of classical random variables on an infinite state space and of quantum states of infinite-dimensional systems.
The proof relies on a new mean-constrained Fano-type inequality and the notion of maximal coupling of random variables.
arXiv Detail & Related papers (2021-04-05T17:18:42Z)
- Linear Time Sinkhorn Divergences using Positive Features [51.50788603386766]
Solving optimal transport with an entropic regularization requires computing an $n \times n$ kernel matrix that is repeatedly applied to a vector.
We propose to use instead ground costs of the form $c(x,y) = -\log \langle \varphi(x), \varphi(y) \rangle$ where $\varphi$ is a map from the ground space onto the positive orthant $\mathbb{R}^r_+$, with $r \ll n$.
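A minimal sketch of the resulting speed-up: when the kernel factorizes as $K = \Phi_x \Phi_y^{\top}$ with nonnegative rank-$r$ features, each Sinkhorn iteration costs $O(nr)$ instead of $O(n^2)$ because $K$ is never formed explicitly (the paper's feature maps and numerical stabilisation are omitted here):

```python
import numpy as np

def sinkhorn_factored(Phi_x, Phi_y, a, b, n_iters=500):
    """Sinkhorn matrix scaling with a factored kernel K = Phi_x @ Phi_y.T,
    Phi_* >= 0 of shape (n, r): every K @ v costs O(nr), not O(n^2)."""
    u = np.ones(Phi_x.shape[0])
    v = np.ones(Phi_y.shape[0])
    for _ in range(n_iters):
        u = a / (Phi_x @ (Phi_y.T @ v))   # row scaling, K never materialized
        v = b / (Phi_y @ (Phi_x.T @ u))   # column scaling
    return u, v  # transport plan diag(u) K diag(v) stays in factored form

# Example with random nonnegative features (r = 10 << n = 1000).
rng = np.random.default_rng(0)
Phi_x, Phi_y = rng.random((1000, 10)), rng.random((1000, 10))
a = np.full(1000, 1e-3); b = np.full(1000, 1e-3)
u, v = sinkhorn_factored(Phi_x, Phi_y, a, b)
print(np.max(np.abs(u * (Phi_x @ (Phi_y.T @ v)) - a)))  # small at convergence
```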
arXiv Detail & Related papers (2020-06-12T10:21:40Z)
- Debiased Sinkhorn barycenters [110.79706180350507]
Entropy regularization in optimal transport (OT) has been the driver of many recent interests for Wasserstein metrics and barycenters in machine learning.
We show how this bias is tightly linked to the reference measure that defines the entropy regularizer.
We propose debiased Wasserstein barycenters that preserve the best of both worlds: fast Sinkhorn-like iterations without entropy smoothing.
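For background, the bias removal referred to is usually phrased through the Sinkhorn divergence, which subtracts the entropic self-transport terms; the paper's debiased barycenters build on this idea (the standard definition is given here only for context):

```latex
S_{\varepsilon}(\alpha, \beta) \;=\;
\mathrm{OT}_{\varepsilon}(\alpha, \beta)
\;-\; \tfrac{1}{2}\, \mathrm{OT}_{\varepsilon}(\alpha, \alpha)
\;-\; \tfrac{1}{2}\, \mathrm{OT}_{\varepsilon}(\beta, \beta)
```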
arXiv Detail & Related papers (2020-06-03T23:06:02Z)