Average Rényi Entropy of a Subsystem in Random Pure State
- URL: http://arxiv.org/abs/2301.09074v2
- Date: Tue, 16 Jan 2024 05:01:14 GMT
- Title: Average Rényi Entropy of a Subsystem in Random Pure State
- Authors: MuSeong Kim, Mi-Ra Hwang, Eylee Jung, and DaeKil Park
- Abstract summary: We plot the $\ln m$-dependence of the quantum information derived from $\widetilde{S}_{\alpha}(m,n)$.
It is remarkable to note that the nearly vanishing region of the information becomes shorter with increasing $\alpha$, and eventually disappears in the limit of $\alpha \rightarrow \infty$.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper we examine the average Rényi entropy $S_{\alpha}$ of a
subsystem $A$ when the whole composite system $AB$ is a random pure state. We
assume that the Hilbert space dimensions of $A$ and $AB$ are $m$ and $m n$
respectively. First, we compute the average Rényi entropy analytically for
$m = \alpha = 2$. We compare this analytical result with the approximate
average Rényi entropy, and the two are shown to be very close. For the general
case we compute the average of the approximate Rényi entropy
$\widetilde{S}_{\alpha} (m,n)$ analytically. When $1 \ll n$,
$\widetilde{S}_{\alpha} (m,n)$ reduces to $\ln m - \frac{\alpha}{2 n} (m -
m^{-1})$, which is in agreement with the asymptotic expression of the average
von Neumann entropy. Based on the analytic result of $\widetilde{S}_{\alpha}
(m,n)$ we plot the $\ln m$-dependence of the quantum information derived from
$\widetilde{S}_{\alpha} (m,n)$. It is remarkable to note that the nearly
vanishing region of the information becomes shorter with increasing $\alpha$,
and eventually disappears in the limit of $\alpha \rightarrow \infty$. The
physical implication of the result is briefly discussed.
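As a quick numerical illustration of the quantities above, the following is a minimal Monte Carlo sketch (not code from the paper): it samples Haar-random pure states of $AB$ with $\dim A = m$ and $\dim B = n$, estimates the exact average Rényi entropy $\langle S_{\alpha}(\rho_A)\rangle$, evaluates the log-of-average quantity $\frac{1}{1-\alpha}\ln\langle \mathrm{Tr}\,\rho_A^{\alpha}\rangle$ (assumed here as a stand-in for the approximate entropy $\widetilde{S}_{\alpha}(m,n)$; the paper's precise definition may differ), and compares both with the large-$n$ expression $\ln m - \frac{\alpha}{2n}(m - m^{-1})$ quoted in the abstract. Parameter values and sample sizes are illustrative choices.

```python
# Minimal Monte Carlo sketch (illustrative, not code from the paper).
# It estimates the average Renyi entropy of subsystem A for Haar-random
# pure states of AB and compares it with the large-n asymptotic expression
# ln(m) - alpha/(2n) * (m - 1/m) quoted in the abstract.
import numpy as np

def renyi_statistics(m, n, alpha, n_samples=5000, seed=0):
    rng = np.random.default_rng(seed)
    entropies, traces = [], []
    for _ in range(n_samples):
        # Haar-random pure state on AB: a normalized complex Gaussian vector,
        # reshaped to an m x n matrix psi so that rho_A = psi @ psi.conj().T.
        psi = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
        psi /= np.linalg.norm(psi)
        p = np.linalg.svd(psi, compute_uv=False) ** 2   # eigenvalues of rho_A
        tr_alpha = np.sum(p ** alpha)                   # Tr rho_A^alpha
        traces.append(tr_alpha)
        entropies.append(np.log(tr_alpha) / (1.0 - alpha))  # S_alpha, alpha != 1
    s_exact = np.mean(entropies)                        # exact average <S_alpha>
    s_approx = np.log(np.mean(traces)) / (1.0 - alpha)  # log-of-average variant
    s_asym = np.log(m) - alpha / (2 * n) * (m - 1 / m)  # large-n expression
    return s_exact, s_approx, s_asym

if __name__ == "__main__":
    for m, n, alpha in [(2, 50, 2.0), (4, 200, 3.0)]:
        s_exact, s_approx, s_asym = renyi_statistics(m, n, alpha)
        print(f"m={m}, n={n}, alpha={alpha}: exact={s_exact:.4f}, "
              f"approx={s_approx:.4f}, asymptotic={s_asym:.4f}")
```

For these parameter choices one would expect the three printed values to lie close to one another, mirroring the abstract's observation that the exact and approximate averages are very close for $m = \alpha = 2$.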
Related papers
- Detection of Dense Subhypergraphs by Low-Degree Polynomials [72.4451045270967]
Detection of a planted dense subgraph in a random graph is a fundamental statistical and computational problem.
We consider detecting the presence of a planted $G^{r}(n^{\gamma}, n^{-\alpha})$ subhypergraph in a $G^{r}(n, n^{-\beta})$ hypergraph.
Our results are already new in the graph case $r=2$, as we consider the subtle log-density regime where hardness based on average-case reductions is not known.
arXiv Detail & Related papers (2023-04-17T10:38:08Z) - Bounds on Renyi entropy growth in many-body quantum systems [0.0]
We prove rigorous bounds on the growth of $\alpha$-Renyi entropies $S_{\alpha}(t)$.
For completely non-local Hamiltonians, we show that the instantaneous growth rates $|S'_{\alpha}(t)|$ can be exponentially larger than $|S'_{\alpha}(t)|$.
arXiv Detail & Related papers (2022-12-14T19:00:01Z) - Universality in the tripartite information after global quenches [0.0]
We study the Rényi-$\alpha$ tripartite information $I_3^{(\alpha)}(A,B,C)$ in clean 1D systems with local Hamiltonians.
We identify different classes of states that, under time evolution with translationally invariant Hamiltonians, locally relax to states with a nonzero (Rényi) tripartite information.
arXiv Detail & Related papers (2022-09-28T17:18:00Z) - A spectral least-squares-type method for heavy-tailed corrupted
regression with unknown covariance & heterogeneous noise [2.019622939313173]
We revisit heavy-tailed corrupted least-squares linear regression, assuming we have a corrupted label-feature sample of size $n$ containing at most $\epsilon n$ arbitrary outliers.
We propose a near-optimal computationally tractable estimator, based on the power method, assuming no knowledge of $(\Sigma, \Xi)$ nor of the operator norm of $\Xi$.
arXiv Detail & Related papers (2022-09-06T23:37:31Z) - Low-Rank Approximation with $1/\epsilon^{1/3}$ Matrix-Vector Products [58.05771390012827]
We study iterative methods based on Krylov subspaces for low-rank approximation under any Schatten-$p$ norm.
Our main result is an algorithm that uses only $\tilde{O}(k/\sqrt{\epsilon})$ matrix-vector products (a generic block Krylov sketch illustrating this style of method appears after this list).
arXiv Detail & Related papers (2022-02-10T16:10:41Z) - Active Sampling for Linear Regression Beyond the $\ell_2$ Norm [70.49273459706546]
We study active sampling algorithms for linear regression, which aim to query only a small number of entries of a target vector.
We show that this dependence on $d$ is optimal, up to logarithmic factors.
We also provide the first total sensitivity upper bound of $O(d^{\max\{1,p/2\}} \log^2 n)$ for loss functions with at most degree-$p$ growth.
arXiv Detail & Related papers (2021-11-09T00:20:01Z) - Threshold Phenomena in Learning Halfspaces with Massart Noise [56.01192577666607]
We study the problem of PAC learning halfspaces on $\mathbb{R}^d$ with Massart noise under Gaussian marginals.
Our results qualitatively characterize the complexity of learning halfspaces in the Massart model.
arXiv Detail & Related papers (2021-08-19T16:16:48Z) - Sparse sketches with small inversion bias [79.77110958547695]
Inversion bias arises when averaging estimates of quantities that depend on the inverse covariance.
We develop a framework for analyzing inversion bias, based on our proposed concept of an $(\epsilon,\delta)$-unbiased estimator for random matrices.
We show that when the sketching matrix $S$ is dense and has i.i.d. sub-gaussian entries, the estimator is $(\epsilon,\delta)$-unbiased for $(A^{\top} A)^{-1}$ with a sketch of size $m = O(d + \sqrt{d}/\dots)$
arXiv Detail & Related papers (2020-11-21T01:33:15Z) - Learning Mixtures of Spherical Gaussians via Fourier Analysis [0.5381004207943596]
We find that a bound on the sample and computational complexity was previously unknown when $\omega(1) \leq d \leq O(\log k)$.
These authors also bound the sample complexity of learning a random mixture of Gaussians in a ball of radius $\Theta(\sqrt{d})$ in $d$ dimensions, when $d$ is at least $\mathrm{poly}(k, \frac{1}{\delta})$.
arXiv Detail & Related papers (2020-04-13T08:06:29Z) - List-Decodable Subspace Recovery: Dimension Independent Error in
Polynomial Time [5.812499828391904]
In list-decodable subspace recovery, the input is a collection of $n$ points, $\alpha n$ of which (for some $\alpha \ll 1/2$) are drawn i.i.d. from a distribution $\mathcal{D}$.
In this work, we improve on results on all three fronts: certifiable anti-concentration, dimension-independent error, and a faster fixed-polynomial running time.
arXiv Detail & Related papers (2020-02-12T18:30:09Z) - On the Complexity of Minimizing Convex Finite Sums Without Using the
Indices of the Individual Functions [62.01594253618911]
We exploit the finite noise structure of finite sums to derive a matching $O(n^2)$ upper bound under the global oracle model.
Following a similar approach, we propose a novel adaptation of SVRG which is both compatible with oracles and achieves complexity bounds of $\tilde{O}(n^2 + n\sqrt{L/\mu})\log(1/\epsilon)$ and $O(n\sqrt{L/\epsilon})$, for $\mu > 0$ and $\mu = 0$, respectively.
arXiv Detail & Related papers (2020-02-09T03:39:46Z)
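As referenced in the low-rank approximation entry above, the following is a minimal sketch of a generic randomized block Krylov low-rank approximation, which builds a rank-$k$ approximation of $A$ using only block products with $A$ and $A^{\top}$. This is a standard construction of that family, assumed here for illustration; it is not the specific algorithm (or the Schatten-$p$ guarantee) of the cited paper, and the block size, depth `q`, and Gaussian test matrix are illustrative choices.

```python
# Minimal sketch of a generic randomized block Krylov low-rank approximation
# (an illustrative construction, not the specific algorithm of the cited paper).
# It accesses A only through block matrix-vector products with A and A^T.
import numpy as np

def block_krylov_low_rank(A, k, q=4, seed=0):
    """Return a rank-k approximation of A built from the block Krylov space
    span{A S, (A A^T) A S, ..., (A A^T)^q A S} with a Gaussian block S."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    Y = A @ rng.normal(size=(n, k))          # first Krylov block
    blocks = [Y]
    for _ in range(q):
        Y = A @ (A.T @ Y)                    # next block: one more power of A A^T
        blocks.append(Y)
    Q, _ = np.linalg.qr(np.hstack(blocks))   # orthonormal basis of the space
    U, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    U_k = Q @ U[:, :k]
    return (U_k * s[:k]) @ Vt[:k]            # rank-k approximation of A

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.normal(size=(300, 200))
    A_k = block_krylov_low_rank(A, k=10)
    # Compare against the optimal rank-10 truncated SVD (Frobenius norm error).
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    best = (U[:, :10] * s[:10]) @ Vt[:10]
    print(np.linalg.norm(A - A_k), np.linalg.norm(A - best))
```

Methods of this type trade a small number of passes over $A$ for accuracy; the cited paper analyzes how few matrix-vector products suffice under Schatten-$p$ norms.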
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.