Exponentially Reduced Circuit Depths Using Trotter Error Mitigation
- URL: http://arxiv.org/abs/2408.14385v2
- Date: Mon, 7 Oct 2024 21:39:58 GMT
- Title: Exponentially Reduced Circuit Depths Using Trotter Error Mitigation
- Authors: James D. Watson, Jacob Watkins
- Abstract summary: Richardson extrapolation and polynomial interpolation have been proposed to mitigate the Trotter error incurred by use of these formulae.
This work provides an improved, rigorous analysis of these techniques for calculating time-evolved expectation values.
We demonstrate that, to achieve error $\epsilon$ in a simulation of time $T$ using a $p^\text{th}$-order product formula with extrapolation, circuits of depth $O\left(T^{1+1/p} \textrm{polylog}(1/\epsilon)\right)$ are sufficient.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Product formulae are a popular class of digital quantum simulation algorithms due to their conceptual simplicity, low overhead, and performance which often exceeds theoretical expectations. Recently, Richardson extrapolation and polynomial interpolation have been proposed to mitigate the Trotter error incurred by use of these formulae. This work provides an improved, rigorous analysis of these techniques for the task of calculating time-evolved expectation values. We demonstrate that, to achieve error $\epsilon$ in a simulation of time $T$ using a $p^\text{th}$-order product formula with extrapolation, circuit depths of $O\left(T^{1+1/p} \textrm{polylog}(1/\epsilon)\right)$ are sufficient -- an exponential improvement in the precision over product formulae alone. Furthermore, we achieve commutator scaling, improve the complexity with $T$, and do not require fractional implementations of Trotter steps. Our results provide a more accurate characterisation of the algorithmic error mitigation techniques currently proposed to reduce Trotter error.
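As a concrete illustration of the extrapolation idea, the sketch below evaluates a time-evolved expectation value with a second-order ($p=2$) product formula at several Trotter step sizes and extrapolates the results to zero step size by Lagrange (polynomial) interpolation. This is a minimal numerical sketch under stated assumptions, not the authors' implementation: the two-qubit Hamiltonian, observable, evolution time, and step counts are illustrative choices, and on hardware each expectation value would be estimated from measurement samples rather than from the exact statevector.

```python
# Minimal sketch: Richardson / polynomial extrapolation of Trotterised
# expectation values (toy example; not the paper's implementation).
import numpy as np
from scipy.linalg import expm

# Toy two-qubit Hamiltonian H = A + B with non-commuting parts (assumption).
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
A = np.kron(Z, Z)
B = 0.7 * (np.kron(X, I2) + np.kron(I2, X))
H = A + B
obs = np.kron(Z, I2)                    # observable to be measured
psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0                           # initial state |00>
T = 2.0                                 # total simulation time

def trotter_expectation(r):
    """<obs(T)> from a second-order (Strang) product formula with r steps."""
    dt = T / r
    step = expm(-1j * A * dt / 2) @ expm(-1j * B * dt) @ expm(-1j * A * dt / 2)
    psi = np.linalg.matrix_power(step, r) @ psi0
    return np.real(np.conj(psi) @ (obs @ psi))

# Evaluate at several step sizes s_k = T / r_k ...
rs = np.array([4, 6, 8, 12])
s = T / rs
vals = np.array([trotter_expectation(r) for r in rs])

# ... and extrapolate to s -> 0 with Lagrange interpolation evaluated at 0.
# The symmetric formula's error series contains only even powers of s,
# so we interpolate in the variable x = s**2.
x = s**2
coeffs = np.array([np.prod([xj / (xj - xk) for xj in x if xj != xk]) for xk in x])
extrapolated = coeffs @ vals

psiT = expm(-1j * H * T) @ psi0
exact = np.real(np.conj(psiT) @ (obs @ psiT))
print(f"exact             : {exact:+.8f}")
print(f"finest raw Trotter: {vals[-1]:+.8f}  (error {abs(vals[-1] - exact):.2e})")
print(f"extrapolated      : {extrapolated:+.8f}  (error {abs(extrapolated - exact):.2e})")
```

In this toy setting the precision improvement comes largely from the classical post-processing of several shallow-circuit estimates rather than from ever-deeper circuits, which is the intuition behind the $\textrm{polylog}(1/\epsilon)$ depth scaling quoted above.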
Related papers
- Deep Inertia $L_p$ Half-Quadratic Splitting Unrolling Network for Sparse View CT Reconstruction [20.632166806596278]
Sparse view computed tomography (CT) reconstruction poses a challenging ill-posed inverse problem, necessitating effective regularization techniques.
We employ $L_p$-norm regularization to induce sparsity and introduce inertial steps, leading to the development of the inertial $L_p$-norm half-quadratic splitting algorithm.
Our proposed algorithm surpasses existing methods, particularly excelling in fewer scanned views and complex noise conditions.
arXiv Detail & Related papers (2024-08-13T03:32:59Z) - Accelerated Stochastic Min-Max Optimization Based on Bias-corrected Momentum [30.01198677588252]
First-order algorithms require at least $\mathcal{O}(\varepsilon^{-4})$ complexity to find an $\varepsilon$-stationary point.
We introduce novel momentum algorithms utilizing efficient variable complexity.
The effectiveness of the method is validated through robust logistic regression using real-world datasets.
arXiv Detail & Related papers (2024-06-18T20:14:52Z) - Uncertainty quantification for iterative algorithms in linear models with application to early stopping [4.150180443030652]
This paper investigates the iterates $\hat{b}^1,\dots,\hat{b}^T$ obtained from iterative algorithms in high-dimensional linear regression problems.
The analysis and proposed estimators are applicable to Gradient Descent (GD), proximal GD, and their accelerated variants such as Fast Iterative Soft-Thresholding (FISTA).
arXiv Detail & Related papers (2024-04-27T10:20:41Z) - Fast Nonlinear Two-Time-Scale Stochastic Approximation: Achieving $O(1/k)$ Finite-Sample Complexity [2.5382095320488665]
This paper develops a new variant of the two-time-scale stochastic approximation to find the roots of two coupled nonlinear operators.
Our key idea is to leverage the classic Ruppert-Polyak averaging technique to dynamically estimate the operators through their samples.
The estimated values of these averaging steps will then be used in the two-time-scale approximation updates to find the desired solution.
arXiv Detail & Related papers (2024-01-23T13:44:15Z) - Stochastic Optimization for Non-convex Problem with Inexact Hessian Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) and adaptive regularization using cubics (ARC) have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously allow for inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z) - Explicit Second-Order Min-Max Optimization Methods with Optimal Convergence Guarantee [86.05440220344755]
We propose and analyze inexact regularized Newton-type methods for finding a global saddle point of convex-concave unconstrained min-max optimization problems.
We show that the proposed methods generate iterates that remain within a bounded set and converge to an $\epsilon$-saddle point within $O(\epsilon^{-2/3})$ iterations in terms of a restricted gap function.
arXiv Detail & Related papers (2022-10-23T21:24:37Z) - Self-healing of Trotter error in digital adiabatic state preparation [52.77024349608834]
We prove that the first-order Trotterization of a complete adiabatic evolution has a cumulative infidelity that scales as $\mathcal{O}(T^{-2} \delta t^2)$ instead of the $\mathcal{O}(T^2 \delta t^2)$ expected from general Trotter error bounds.
This result suggests a self-healing mechanism and explains why, despite increasing $T$, infidelities for fixed-$\delta t$ digitized evolutions still decrease for a wide variety of Hamiltonians.
arXiv Detail & Related papers (2022-09-13T18:05:07Z) - Robust Sparse Mean Estimation via Sum of Squares [42.526664955704746]
We study the problem of high-dimensional sparse mean estimation in the presence of an $\epsilon$-fraction of adversarial outliers.
Our algorithms follow the Sum-of-Squares based proofs-to-algorithms approach.
arXiv Detail & Related papers (2022-06-07T16:49:54Z) - Alternatives to a nonhomogeneous partial differential equation quantum algorithm [52.77024349608834]
We propose a quantum algorithm for solving nonhomogeneous linear partial differential equations of the form $A\psi(\textbf{r})=f(\textbf{r})$.
These achievements enable easier experimental implementation of the quantum algorithm with present-day technology.
arXiv Detail & Related papers (2022-05-11T14:29:39Z) - Reducing the Variance of Gaussian Process Hyperparameter Optimization with Preconditioning [54.01682318834995]
Preconditioning is a highly effective step for any iterative method involving matrix-vector multiplication.
We prove that preconditioning has an additional benefit that has been previously unexplored.
It can simultaneously reduce variance at essentially negligible cost.
arXiv Detail & Related papers (2021-07-01T06:43:11Z) - Higher-order Derivatives of Weighted Finite-state Machines [68.43084108204741]
This work examines the computation of higher-order derivatives with respect to the normalization constant for weighted finite-state machines.
We provide a general algorithm for evaluating derivatives of all orders, which has not been previously described in the literature.
Our algorithm is significantly faster than prior algorithms.
arXiv Detail & Related papers (2021-06-01T19:51:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.