Reducing Circuit Depth in Lindblad Simulation via Step-Size Extrapolation
- URL: http://arxiv.org/abs/2507.22341v1
- Date: Wed, 30 Jul 2025 02:56:07 GMT
- Title: Reducing Circuit Depth in Lindblad Simulation via Step-Size Extrapolation
- Authors: Pegah Mohammadipour, Xiantao Li
- Abstract summary: We study algorithmic error mitigation via Richardson-style extrapolation for quantum simulations of open quantum systems modelled by the Lindblad equation. We show that an extrapolator reduces the maximum circuit depth needed for accuracy $\varepsilon$ from polynomial $\mathcal{O}((lT)^{2}/\varepsilon)$ to polylogarithmic $\mathcal{O}((lT)^{2}\log l \log^{2}(1/\varepsilon))$ scaling.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study algorithmic error mitigation via Richardson-style extrapolation for quantum simulations of open quantum systems modelled by the Lindblad equation. Focusing on two specific first-order quantum algorithms, we perform a backward-error analysis to obtain a step-size expansion of the density operator with explicit coefficient bounds. These bounds supply the necessary smoothness for analyzing Richardson extrapolation, allowing us to bound both the deterministic bias and the shot-noise variance that arise in post-processing. For a Lindblad dynamics with generator bounded by $l$, our main theorem shows that an $n=\Omega (\log(1/\varepsilon))$-point extrapolator reduces the maximum circuit depth needed for accuracy $\varepsilon$ from polynomial $\mathcal{O} ((lT)^{2}/\varepsilon)$ to polylogarithmic $\mathcal{O} ((lT)^{2} \log l \log^2(1/\varepsilon))$ scaling, an exponential improvement in~$1/\varepsilon$, while keeping sampling complexity to the standard $1/\varepsilon^2$ level, thus extending such results for Hamiltonian simulations to Lindblad simulations. Several numerical experiments illustrate the practical viability of the method.
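To make the extrapolation idea concrete, here is a minimal classical sketch, not the paper's quantum algorithm: a first-order Euler integrator for a hypothetical single-qubit amplitude-damping Lindbladian is run at step sizes $s_0/1,\dots,s_0/n$, and the results are combined with Richardson weights that cancel the low-order bias terms in the step-size expansion. The model, parameter values, and helper names (`first_order_step`, `simulate`, `richardson`) are illustrative assumptions.

```python
# Minimal classical sketch of step-size (Richardson) extrapolation for a
# first-order Lindblad integrator. Illustrative only: explicit Euler on a
# single-qubit amplitude-damping model stands in for the first-order quantum
# channels analyzed in the paper; all names and parameters are assumptions.
import numpy as np
from scipy.linalg import expm

# Single-qubit model: H = sigma_z, one jump operator L = sqrt(gamma) * sigma_minus.
H = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
gamma = 0.3
L = np.sqrt(gamma) * np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)

def lindbladian(rho):
    """Right-hand side of the Lindblad equation d(rho)/dt = L(rho)."""
    comm = -1j * (H @ rho - rho @ H)
    diss = L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)
    return comm + diss

def first_order_step(rho, s):
    """One explicit Euler step of size s (first-order in s)."""
    return rho + s * lindbladian(rho)

def simulate(rho0, T, s):
    """Evolve rho0 to time T with fixed step size s."""
    rho = rho0.copy()
    for _ in range(int(round(T / s))):
        rho = first_order_step(rho, s)
    return rho

def richardson(rho0, T, s0, n):
    """Combine n runs at step sizes s0/1, ..., s0/n with weights chosen so the
    O(s), ..., O(s^{n-1}) bias terms of the step-size expansion cancel."""
    steps = s0 / np.arange(1, n + 1)
    # Solve V^T w = e_1: sum_j w_j = 1 and sum_j w_j * steps_j^m = 0 for m = 1..n-1.
    V = np.vander(steps, n, increasing=True)
    e1 = np.zeros(n)
    e1[0] = 1.0
    w = np.linalg.solve(V.T, e1)
    return sum(wj * simulate(rho0, T, sj) for wj, sj in zip(w, steps))

# Reference solution from the exact vectorized Lindblad generator (column-stacking).
I2 = np.eye(2)
LdL = L.conj().T @ L
superop = (-1j * (np.kron(I2, H) - np.kron(H.T, I2))
           + np.kron(L.conj(), L)
           - 0.5 * (np.kron(I2, LdL) + np.kron(LdL.T, I2)))
rho0 = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)
T, s0 = 1.0, 0.1
rho_exact = (expm(T * superop) @ rho0.reshape(-1, order="F")).reshape(2, 2, order="F")

for n in (1, 2, 3):
    err = np.linalg.norm(richardson(rho0, T, s0, n) - rho_exact)
    print(f"n = {n} extrapolation nodes: error = {err:.2e}")
```

In this sketch the deterministic bias shrinks with the number of extrapolation nodes even though every run uses the same coarse base step size, which mirrors the depth-reduction mechanism described in the abstract; shot noise from quantum measurements is not modelled here.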
Related papers
- A Randomized Method for Simulating Lindblad Equations and Thermal State Preparation [24.332332092371303]
We study a qDRIFT-type randomized method to simulate Lindblad dynamics by decomposing its generator into an ensemble of Lindbladians.
We derive a new quantum Gibbs sampler algorithm that utilizes jump operators sampled from a Clifford-random circuit.
arXiv Detail & Related papers (2024-07-09T06:55:19Z) - Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z) - An efficient quantum algorithm for generation of ab initio n-th order susceptibilities for non-linear spectroscopies [0.13980986259786224]
We develop and analyze a fault-tolerant quantum algorithm for computing $n$-th order response properties.
We use a semi-classical description in which the electronic degrees of freedom are treated quantum mechanically and the light is treated as a classical field.
arXiv Detail & Related papers (2024-04-01T20:04:49Z) - Quantum Simulation of Lindbladian Dynamics via Repeated Interactions [0.5097809301149342]
We make use of an approximate correspondence between Lindbladian dynamics and evolution based on Repeated Interaction (RI) CPTP maps.
We show that the number of interactions needed to simulate the Liouvillian $e^{t\mathcal{L}}$ within error $\epsilon$ scales in a weak coupling limit.
arXiv Detail & Related papers (2023-12-08T21:17:16Z) - Wave Matrix Lindbladization I: Quantum Programs for Simulating Markovian Dynamics [6.345523830122166]
Density Matrix Exponentiation is a technique for simulating Hamiltonian dynamics when the Hamiltonian to be simulated is available as a quantum state.
We present a natural analogue to this technique, for simulating Markovian dynamics governed by the well known Lindblad master equation.
We propose a quantum algorithm for this task, called Wave Matrix Lindbladization, and we also investigate its sample complexity.
arXiv Detail & Related papers (2023-07-27T15:22:04Z) - Efficient Sampling of Stochastic Differential Equations with Positive Semi-Definite Models [91.22420505636006]
This paper deals with the problem of efficient sampling from a differential equation, given the drift function and the diffusion matrix.
It is possible to obtain independent and identically distributed (i.i.d.) samples at precision $\varepsilon$ with a cost that is $m^{2} d \log(1/\varepsilon)$.
Our results suggest that as the true solution gets smoother, we can circumvent the curse of dimensionality without requiring any sort of convexity.
arXiv Detail & Related papers (2023-03-30T02:50:49Z) - ReSQueing Parallel and Private Stochastic Convex Optimization [59.53297063174519]
We introduce a new tool for stochastic convex optimization (SCO): a Reweighted Query (ReSQue) estimator for the gradient of a function convolved with a (Gaussian) probability density.
We develop algorithms achieving state-of-the-art complexities for SCO in parallel and private settings.
arXiv Detail & Related papers (2023-01-01T18:51:29Z) - A Law of Robustness beyond Isoperimetry [84.33752026418045]
We prove a Lipschitzness lower bound $\Omega(\sqrt{n/p})$ on the robustness of interpolating neural network parameters on arbitrary distributions.
We then show the potential benefit of overparametrization for smooth data when $n=\mathrm{poly}(d)$.
We disprove the potential existence of an $O(1)$-Lipschitz robust interpolating function when $n=\exp(\omega(d))$.
arXiv Detail & Related papers (2022-02-23T16:10:23Z) - Enhancing the Quantum Linear Systems Algorithm using Richardson Extrapolation [0.8057006406834467]
We present a quantum algorithm to solve systems of linear equations of the form $A\mathbf{x}=\mathbf{b}$.
The algorithm achieves an exponential improvement with respect to $N$ over classical methods.
arXiv Detail & Related papers (2020-09-09T18:00:09Z) - Optimal Robust Linear Regression in Nearly Linear Time [97.11565882347772]
We study the problem of high-dimensional robust linear regression where a learner is given access to $n$ samples from the generative model $Y = \langle X, w^{*} \rangle + \epsilon$.
We propose estimators for this problem under two settings: (i) $X$ is $L_4$-$L_2$ hypercontractive, $\mathbb{E}[XX^{\top}]$ has bounded condition number and $\epsilon$ has bounded variance and (ii) $X$ is sub-Gaussian with identity second moment and $\epsilon$ is
arXiv Detail & Related papers (2020-07-16T06:44:44Z) - Quantum Algorithms for Simulating the Lattice Schwinger Model [63.18141027763459]
We give scalable, explicit digital quantum algorithms to simulate the lattice Schwinger model in both NISQ and fault-tolerant settings.
In lattice units, we find a Schwinger model on $N/2$ physical sites with coupling constant $x^{-1/2}$ and electric field cutoff $x^{-1/2}\Lambda$.
We estimate observables which we cost in both the NISQ and fault-tolerant settings by assuming a simple target observable---the mean pair density.
arXiv Detail & Related papers (2020-02-25T19:18:36Z)