On the commutator scaling in Hamiltonian simulation with multi-product formulas
- URL: http://arxiv.org/abs/2507.06557v2
- Date: Thu, 31 Jul 2025 08:13:36 GMT
- Title: On the commutator scaling in Hamiltonian simulation with multi-product formulas
- Authors: Kaoru Mizuta
- Abstract summary: We show an alternative commutator-scaling error bound for MPF and derive its size-efficient cost, properly inheriting the advantage of Trotterization. We prove that Hamiltonian simulation by MPF achieves a cost whose system-size dependence matches that of Trotterization.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A multi-product formula (MPF) is a promising approach for Hamiltonian simulation that is efficient both in the system size $N$ and in the inverse allowable error $1/\varepsilon$, combining Trotterization with the linear combination of unitaries (LCU). It achieves poly-logarithmic cost in $1/\varepsilon$ like LCU [G. H. Low, V. Kliuchnikov, N. Wiebe, arXiv:1907.11679 (2019)]. The efficiency in $N$ is expected to come from the commutator scaling of Trotterization, and this appears to be confirmed by the error bound of MPF expressed by nested commutators [J. Aftab, D. An, K. Trivisa, arXiv:2403.08922 (2024)]. However, we point out that the efficiency of MPF in the system size $N$ is not yet fully resolved: the present error bound expressed by nested commutators is incompatible with a size-efficient complexity reflecting the commutator scaling. The problem is that $q$-fold nested commutators with arbitrarily large $q$ are involved in the requirement and the error bound; the benefit of commutator scaling from locality is absent, and a cost efficient in $N$ is prohibited in general. In this paper, we show an alternative commutator-scaling error bound for MPF and derive its size-efficient cost, properly inheriting the advantage of Trotterization. The requirement and the error bound in our analysis, derived by techniques from the Floquet-Magnus expansion, involve nested commutators only up to a certain truncation order and can fully exploit locality. We prove that Hamiltonian simulation by MPF indeed achieves a cost whose system-size dependence matches that of Trotterization while keeping the $\mathrm{polylog}(1/\varepsilon)$ scaling of LCU. Our results also provide improved or more accurate error and cost bounds for various algorithms using interpolation or extrapolation of Trotterization.
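Concretely, an MPF combines several second-order Trotter circuits run with different Trotter numbers $k_j$, with coefficients $a_j$ chosen so that low-order error terms cancel; the LCU overhead is governed by the 1-norm $\sum_j |a_j|$. Below is a minimal sketch of this standard coefficient construction via a Vandermonde system, assuming the symmetric second-order base formula; the choice of the $k_j$ and normalization conventions vary across the cited works.

```python
import numpy as np

def mpf_coefficients(ks):
    """Coefficients a_j of a multi-product formula
    sum_j a_j * S_2(t / k_j)^{k_j} built on a second-order
    Trotter formula S_2, whose error expands in even powers
    of 1/k_j. The a_j solve the Vandermonde system
        sum_j a_j            = 1,
        sum_j a_j / k_j^{2q} = 0   for q = 1, ..., m - 1,
    cancelling the first m - 1 even-order error terms.
    """
    ks = np.asarray(ks, dtype=float)
    m = len(ks)
    x = 1.0 / ks**2
    V = np.vander(x, N=m, increasing=True).T  # row q holds x_j^q
    rhs = np.zeros(m)
    rhs[0] = 1.0
    return np.linalg.solve(V, rhs)

a = mpf_coefficients([1, 2, 3, 4])
print("coefficients:", a)
print("1-norm (governs LCU cost):", np.abs(a).sum())
```

A well-conditioned MPF keeps this 1-norm bounded as the order grows, which is what makes the $\mathrm{polylog}(1/\varepsilon)$ LCU cost attainable.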
Related papers
- Exponentially Reduced Circuit Depths Using Trotter Error Mitigation [0.0]
Richardson extrapolation has been proposed to mitigate the Trotter error incurred by the use of product formulas.
This work provides an improved, rigorous analysis of these techniques for calculating time-evolved expectation values.
We demonstrate that, to achieve error $\epsilon$ in a simulation of time $T$ using a $p$-th-order product formula with extrapolation, circuits of depth $O\left(T^{1+1/p}\,\mathrm{polylog}(1/\epsilon)\right)$ are sufficient.
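As a generic illustration of the technique (not the paper's exact scheme), the sketch below Richardson-extrapolates time-evolved expectation values computed at several Trotter numbers; the two-qubit Hamiltonian, its splitting, and the observable are hypothetical stand-ins.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical two-qubit Hamiltonian, split as H = A + B.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
A = np.kron(Z, Z)
B = np.kron(X, np.eye(2)) + np.kron(np.eye(2), X)
H = A + B

def trotter2_expectation(psi0, O, T, r):
    """<O> after r second-order Trotter steps of total time T."""
    h = T / r
    step = expm(-1j * A * h / 2) @ expm(-1j * B * h) @ expm(-1j * A * h / 2)
    psi = psi0.copy()
    for _ in range(r):
        psi = step @ psi
    return (psi.conj() @ O @ psi).real

def richardson(psi0, O, T, rs):
    """Extrapolate to zero step size, assuming an error expansion in
    even powers of h_j = T / r_j (second-order base formula)."""
    x = (T / np.asarray(rs, dtype=float)) ** 2
    V = np.vander(x, N=len(rs), increasing=True).T
    rhs = np.zeros(len(rs))
    rhs[0] = 1.0
    c = np.linalg.solve(V, rhs)
    vals = np.array([trotter2_expectation(psi0, O, T, r) for r in rs])
    return c @ vals

psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0
O = np.kron(Z, np.eye(2))
exact_psi = expm(-1j * H * 1.0) @ psi0
print("extrapolated:", richardson(psi0, O, T=1.0, rs=[4, 8, 16]))
print("exact:       ", (exact_psi.conj() @ O @ exact_psi).real)
```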
arXiv Detail & Related papers (2024-08-26T16:08:07Z)
- Approximate Unitary $k$-Designs from Shallow, Low-Communication Circuits [6.844618776091756]
An approximate unitary $k$-design is an ensemble of unitaries, together with a measure, whose average is close to that of the Haar-random ensemble up to the first $k$ moments.
We construct multiplicative-error approximate unitary $k$-design ensembles for which communication between subsystems is $O(1)$ in the system size.
arXiv Detail & Related papers (2024-07-10T17:43:23Z)
- Chain of Thought Empowers Transformers to Solve Inherently Serial Problems [57.58801785642868]
Chain of thought (CoT) is a highly effective method to improve the accuracy of large language models (LLMs) on arithmetic and symbolic reasoning tasks.
This work provides a theoretical understanding of the power of CoT for decoder-only transformers through the lens of expressiveness.
arXiv Detail & Related papers (2024-02-20T10:11:03Z)
- Orthogonal Directions Constrained Gradient Method: from non-linear equality constraints to Stiefel manifold [16.099883128428054]
We propose a novel algorithm, the Orthogonal Directions Constrained Gradient Method (ODCGM).
ODCGM only requires computing a projection onto a vector space.
We show that ODCGM exhibits near-optimal oracle complexities.
arXiv Detail & Related papers (2023-03-16T12:25:53Z)
- On the complexity of implementing Trotter steps [2.1369834525800138]
We develop methods to perform faster Trotter steps with complexity sublinear in the number of terms.
We also realize faster Trotter steps when certain blocks of Hamiltonian coefficients have low rank.
Our results suggest the use of Hamiltonian structural properties as both necessary and sufficient conditions for implementing Trotter steps with lower gate complexity.
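For context, a plain first-order Trotter step costs one matrix exponential per Hamiltonian term, i.e. linear in the number of terms; this is the baseline the paper improves on for structured Hamiltonians. A minimal sketch with a hypothetical two-qubit term list:

```python
import numpy as np
from scipy.linalg import expm

def trotter_step(terms, dt):
    """One first-order Trotter step, prod_l exp(-i H_l dt).
    Cost is linear in the number of terms L."""
    U = np.eye(terms[0].shape[0], dtype=complex)
    for H_l in terms:
        U = expm(-1j * H_l * dt) @ U
    return U

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)
terms = [np.kron(Z, Z), np.kron(X, I2), np.kron(I2, X)]
U = trotter_step(terms, dt=0.01)
```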
arXiv Detail & Related papers (2022-11-16T19:00:01Z)
- A Law of Robustness beyond Isoperimetry [84.33752026418045]
We prove a Lipschitzness lower bound $\Omega(\sqrt{n/p})$ on the robustness of interpolating neural networks with $p$ parameters on arbitrary distributions.
We then show the potential benefit of overparametrization for smooth data when $n=\mathrm{poly}(d)$.
We disprove the potential existence of an $O(1)$-Lipschitz robust interpolating function when $n=\exp(\omega(d))$.
arXiv Detail & Related papers (2022-02-23T16:10:23Z)
- Reducing the Variance of Gaussian Process Hyperparameter Optimization with Preconditioning [54.01682318834995]
Preconditioning is a highly effective step for any iterative method involving matrix-vector multiplication.
We prove that preconditioning has an additional, previously unexplored benefit: it can simultaneously reduce variance at essentially negligible cost.
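A minimal sketch of the general idea, using a Nystrom-style low-rank preconditioner applied through the Woodbury identity as a stand-in for the preconditioners used in this line of work; the kernel, data, and rank below are hypothetical.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(0)
n, k, noise = 800, 40, 0.05
Xd = rng.standard_normal((n, 2))

# RBF kernel; GP computations repeatedly solve (K_rbf + noise*I) v = y.
sq = ((Xd[:, None, :] - Xd[None, :, :]) ** 2).sum(-1)
K_rbf = np.exp(-0.5 * sq)
K = K_rbf + noise * np.eye(n)
y = rng.standard_normal(n)

# Low-rank preconditioner P = U C^{-1} U^T + noise*I, inverted via Woodbury.
idx = rng.choice(n, size=k, replace=False)
U = K_rbf[:, idx]
C = K_rbf[np.ix_(idx, idx)] + 1e-8 * np.eye(k)
inner = noise * C + U.T @ U

def apply_pinv(v):
    # Woodbury: P^{-1} v = (v - U (noise*C + U^T U)^{-1} U^T v) / noise
    return (v - U @ np.linalg.solve(inner, U.T @ v)) / noise

M = LinearOperator((n, n), matvec=apply_pinv, dtype=float)

def cg_iterations(M=None):
    count = [0]
    cg(K, y, M=M, callback=lambda _: count.__setitem__(0, count[0] + 1))
    return count[0]

print("CG iterations without / with preconditioning:",
      cg_iterations(), "/", cg_iterations(M))
```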
arXiv Detail & Related papers (2021-07-01T06:43:11Z)
- Hybrid Stochastic-Deterministic Minibatch Proximal Gradient: Less-Than-Single-Pass Optimization with Nearly Optimal Generalization [83.80460802169999]
We prove that HSDMPG can attain an optimization error of order $\mathcal{O}\big(1/\sqrt{n}\big)$, which is of the order of the excess error of a learning model.
arXiv Detail & Related papers (2020-09-18T02:18:44Z)
- Convergence of Sparse Variational Inference in Gaussian Processes Regression [29.636483122130027]
We show that a method with an overall computational cost of $\mathcal{O}\big(N(\log N)^{2D}(\log\log N)^2\big)$ can be used to perform inference.
arXiv Detail & Related papers (2020-08-01T19:23:34Z)
- Linear Time Sinkhorn Divergences using Positive Features [51.50788603386766]
Solving optimal transport with an entropic regularization requires computing an $n\times n$ kernel matrix that is repeatedly applied to a vector.
We propose to use instead ground costs of the form $c(x,y)=-\log\langle\varphi(x),\varphi(y)\rangle$, where $\varphi$ is a map from the ground space onto the positive orthant $\mathbb{R}^r_+$, with $r\ll n$.
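A minimal sketch of the resulting linear-time Sinkhorn iteration, with random positive features standing in for the paper's feature maps $\varphi$; each matrix-vector product with the implicit rank-$r$ kernel costs $O((n+m)r)$ instead of $O(nm)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 2000, 1500, 64

# Hypothetical positive features phi in R^r_+; the Gibbs kernel
# K_ij = <phi(x_i), phi(y_j)> is never formed explicitly.
Phi_x = np.abs(rng.standard_normal((n, r)))
Phi_y = np.abs(rng.standard_normal((m, r)))

a = np.full(n, 1.0 / n)  # source marginal
b = np.full(m, 1.0 / m)  # target marginal

u, v = np.ones(n), np.ones(m)
for _ in range(200):
    u = a / (Phi_x @ (Phi_y.T @ v))  # apply K   in O((n + m) r)
    v = b / (Phi_y @ (Phi_x.T @ u))  # apply K^T in O((n + m) r)

# The implicit plan diag(u) K diag(v) matches the target marginal b:
col_marginal = v * (Phi_y @ (Phi_x.T @ u))
print(np.allclose(col_marginal, b))  # True
```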
arXiv Detail & Related papers (2020-06-12T10:21:40Z)
- Robustly Learning any Clusterable Mixture of Gaussians [55.41573600814391]
We study the efficient learnability of high-dimensional Gaussian mixtures in the adversarially robust setting.
We provide an algorithm that learns the components of an $\epsilon$-corrupted $k$-mixture to within the information-theoretically near-optimal error of $\tilde{O}(\epsilon)$.
Our main technical contribution is a new robust identifiability proof for clusters of a Gaussian mixture, which can be captured by the constant-degree Sum-of-Squares proof system.
arXiv Detail & Related papers (2020-05-13T16:44:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.