Quantum state tomography via non-convex Riemannian gradient descent
- URL: http://arxiv.org/abs/2210.04717v1
- Date: Mon, 10 Oct 2022 14:19:23 GMT
- Title: Quantum state tomography via non-convex Riemannian gradient descent
- Authors: Ming-Chien Hsu, En-Jui Kuo, Wei-Hsuan Yu, Jian-Feng Cai, and Min-Hsiu
Hsieh
- Abstract summary: In this work, we derive a quantum state tomography scheme that improves the dependence on $\kappa$ to the logarithmic scale.
The improvement comes from the application of the non-convex Riemannian gradient descent (RGD) algorithm.
Our theoretical results of extremely fast convergence and nearly optimal error bounds are corroborated by numerical results.
- Score: 13.100184125419691
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The recovery of an unknown density matrix of large size requires huge
computational resources. The recent Factored Gradient Descent (FGD) algorithm
and its variants achieved state-of-the-art performance since they could
mitigate the dimensionality barrier by utilizing some of the underlying
structures of the density matrix. Despite their theoretical guarantee of a
linear convergence rate, the convergence in practical scenarios is still slow
because the contracting factor of the FGD algorithms depends on the condition
number $\kappa$ of the ground truth state. Consequently, the total number of
iterations can be as large as $O(\sqrt{\kappa}\ln(\frac{1}{\varepsilon}))$ to
achieve the estimation error $\varepsilon$. In this work, we derive a quantum
state tomography scheme that improves the dependence on $\kappa$ to the
logarithmic scale; namely, our algorithm could achieve the approximation error
$\varepsilon$ in $O(\ln(\frac{1}{\kappa\varepsilon}))$ steps. The improvement
comes from the application of the non-convex Riemannian gradient descent (RGD).
The contracting factor in our approach is thus a universal constant that is
independent of the given state. Our theoretical results of extremely fast
convergence and nearly optimal error bounds are corroborated by numerical
results.
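For intuition only, here is a minimal NumPy sketch of a Riemannian gradient descent of this flavor for recovering a low-rank Hermitian matrix from linear measurements $y_i = \mathrm{Tr}(A_i\rho)$. It is not the authors' implementation: the measurement operators `A`, the step size `eta`, the iteration count, and the `hard_threshold` retraction are illustrative assumptions.

```python
import numpy as np

def hard_threshold(M, rank):
    """Best rank-`rank` Hermitian approximation via eigendecomposition."""
    w, V = np.linalg.eigh((M + M.conj().T) / 2)   # symmetrize for numerical stability
    idx = np.argsort(np.abs(w))[::-1][:rank]       # keep largest-magnitude eigenvalues
    return (V[:, idx] * w[idx]) @ V[:, idx].conj().T

def rgd_tomography(A, y, rank, steps=200, eta=0.5):
    """Sketch: recover a low-rank Hermitian X from measurements y_i ~ Tr(A_i X).

    A: (m, d, d) array of Hermitian measurement operators (hypothetical input).
    y: (m,) array of observed expectation values.
    """
    m = len(y)
    # Spectral initialization: back-project the data and truncate to the target rank.
    X = hard_threshold(sum(y[i] * A[i] for i in range(m)) / m, rank)

    for _ in range(steps):
        residual = np.array([np.trace(A[i] @ X).real for i in range(m)]) - y
        # Euclidean gradient of the least-squares loss 0.5 * sum_i (Tr(A_i X) - y_i)^2 / m.
        G = sum(residual[i] * A[i] for i in range(m)) / m
        # Project the gradient onto the tangent space of the fixed-rank manifold at X.
        U = np.linalg.svd(X, hermitian=True)[0][:, :rank]
        P = U @ U.conj().T
        G_tan = P @ G + G @ P - P @ G @ P
        # Retraction: take the step, then truncate back to the fixed rank.
        X = hard_threshold(X - eta * G_tan, rank)

    # Crude physical normalization (unit trace); not the paper's exact projection.
    return X / np.trace(X).real
```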
Related papers
- Gradient Compressed Sensing: A Query-Efficient Gradient Estimator for High-Dimensional Zeroth-Order Optimization [48.84672493756553]
We propose a query-efficient and accurate estimator for gradients that uses only $O\big(s\log\frac{d}{s}\big)$ queries per step.
Our proposed GraCe generalizes the Indyk--Price--Woodruff (IPW) algorithm in compressed sensing from linear measurements to nonlinear functions.
arXiv Detail & Related papers (2024-05-27T03:52:53Z)
- Revisiting the Last-Iterate Convergence of Stochastic Gradient Methods [25.831462008050387]
The Stochastic Gradient Descent (SGD) algorithm has attracted considerable interest due to its good performance in practice despite the lack of theoretical understanding.
It remains unclear whether last-iterate convergence can be provably extended to broader composite optimization problems and non-Euclidean norms.
arXiv Detail & Related papers (2023-12-13T21:41:06Z)
- Convergence of Adam Under Relaxed Assumptions [72.24779199744954]
We show that Adam converges to $\epsilon$-stationary points with $O(\epsilon^{-4})$ gradient complexity under far more realistic conditions.
We also propose a variance-reduced version of Adam with an accelerated gradient complexity of $O(\epsilon^{-3})$.
arXiv Detail & Related papers (2023-04-27T06:27:37Z)
- On the Last Iterate Convergence of Momentum Methods [32.60494006038902]
We prove for the first time that for any constant momentum factor, there exists a Lipschitz and convex function for which the last iterate suffers from a suboptimal error.
We show that in the setting with convex and smooth functions, our new SGDM algorithm automatically converges at a rate of $O(\frac{\log T}{T})$.
arXiv Detail & Related papers (2021-02-13T21:16:16Z)
- Faster Convergence of Stochastic Gradient Langevin Dynamics for Non-Log-Concave Sampling [110.88857917726276]
We provide a new convergence analysis of stochastic gradient Langevin dynamics (SGLD) for sampling from a class of distributions that can be non-log-concave.
At the core of our approach is a novel conductance analysis of SGLD using an auxiliary time-reversible Markov Chain.
arXiv Detail & Related papers (2020-10-19T15:23:18Z)
- Hybrid Stochastic-Deterministic Minibatch Proximal Gradient: Less-Than-Single-Pass Optimization with Nearly Optimal Generalization [83.80460802169999]
We show that HSDMPG can attain an error of order $\mathcal{O}\big(1/\sqrt{n}\big)$, which is at the order of the intrinsic excess error of a learning model.
For the considered loss functions, we prove that HSDMPG attains the same $\mathcal{O}\big(1/\sqrt{n}\big)$ order of excess error.
arXiv Detail & Related papers (2020-09-18T02:18:44Z)
- On the Almost Sure Convergence of Stochastic Gradient Descent in Non-Convex Problems [75.58134963501094]
This paper analyzes the trajectories of stochastic gradient descent (SGD).
We show that SGD avoids strict saddle points/manifolds with probability $1$ for the considered class of step-size policies.
arXiv Detail & Related papers (2020-06-19T14:11:26Z)
- Differentially Quantized Gradient Methods [53.3186247068836]
We show that Differentially Quantized Gradient Descent (DQ-GD) attains a linear contraction factor of $\max\{\sigma_{\mathrm{GD}}, \rho_n 2^{-R}\}$.
No algorithm within a certain class can converge faster than $\max\{\sigma_{\mathrm{GD}}, 2^{-R}\}$.
arXiv Detail & Related papers (2020-02-06T20:40:53Z)