Second-order discretization of Dyson series: iterative method, numerical analysis and applications in open quantum systems
- URL: http://arxiv.org/abs/2510.15287v1
- Date: Fri, 17 Oct 2025 03:55:09 GMT
- Title: Second-order discretization of Dyson series: iterative method, numerical analysis and applications in open quantum systems
- Authors: Zhenning Cai, Yixiao Sun, Geshuo Wang,
- Abstract summary: We propose a general strategy to discretize the Dyson series without applying numerical quadrature to high-dimensional integrals. The resulting discretization can also be interpreted as a Strang splitting combined with a Taylor expansion. We develop a numerically exact iterative method for simulating system-bath dynamics.
- Score: 0.43012765978447565
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a general strategy to discretize the Dyson series without applying direct numerical quadrature to high-dimensional integrals, and extend this framework to open quantum systems. The resulting discretization can also be interpreted as a Strang splitting combined with a Taylor expansion. Based on this formulation, we develop a numerically exact iterative method for simulating system-bath dynamics. We propose two numerical schemes, which are first-order and second-order in the time step $\Delta t$, respectively. We perform a rigorous numerical analysis to establish the convergence orders of both schemes, proving that the global error decreases as $\mathcal{O}(\Delta t)$ and $\mathcal{O}(\Delta t^2)$ for the first- and second-order methods, respectively. In the second-order scheme, most terms arising from the Strang splitting and Taylor expansion can be safely omitted while maintaining second-order accuracy, leading to a substantial reduction in computational complexity. The second-order method achieves a time complexity of $\mathcal{O}(M^3 2^{2K_{\max}} K_{\max}^2)$ and a space complexity of $\mathcal{O}(M^2 2^{2K_{\max}} K_{\max})$, where $M$ denotes the number of system levels and $K_{\max}$ the number of time steps within the memory length. Compared with existing methods, our approach requires substantially less memory and computational effort for multilevel systems ($M\geqslant 3$). Numerical experiments are carried out to illustrate the validity and efficiency of our method.
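The second-order accuracy claimed in the abstract rests on the standard error behavior of Strang splitting. As a toy illustration (not the paper's iterative method, which additionally handles the bath memory via the Dyson series), the sketch below applies Strang splitting to a Hamiltonian split as $H = A + B$ — for example a system part and a coupling part — and checks numerically that halving $\Delta t$ reduces the global error by roughly a factor of 4:

```python
# Toy sketch of why Strang splitting is second-order in the time step.
# Locally, exp(-i H dt) = exp(-i A dt/2) exp(-i B dt) exp(-i A dt/2) + O(dt^3),
# so the global error over a fixed interval scales as O(dt^2).
# A and B here are random Hermitian matrices, standing in for the system
# and coupling parts; this is an assumption for illustration only.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def rand_herm(n):
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (m + m.conj().T) / 2

n = 4
A, B = rand_herm(n), rand_herm(n)
H = A + B

def strang_step(dt):
    # One Strang-split propagator step for exp(-i H dt).
    return expm(-0.5j * A * dt) @ expm(-1j * B * dt) @ expm(-0.5j * A * dt)

def global_error(dt, T=1.0):
    # Spectral-norm distance between the split evolution and the exact one.
    steps = int(round(T / dt))
    U = np.linalg.matrix_power(strang_step(dt), steps)
    return np.linalg.norm(U - expm(-1j * H * T), 2)

ratio = global_error(0.1) / global_error(0.05)
print(ratio)  # expected to be close to 4 for a second-order method
```

Halving the step size should shrink the error by about $2^2 = 4$, which is the signature of the $\mathcal{O}(\Delta t^2)$ global convergence the paper proves for its second-order scheme.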
Related papers
- Diffusion Computation versus Quantum Computation: A Comparative Model for Order Finding and Factoring [0.0]
We study a hybrid computational model for integer factorization in which the only non-classical resource is access to an iterated diffusion process on a finite graph. Our comparison with Shor's algorithm is conceptual and model-based. We report complexity in two cost measures: digital steps and diffusion steps.
arXiv Detail & Related papers (2026-01-05T19:45:38Z) - Two Quantum Algorithms for Nonlinear Reaction-Diffusion Equation using Chebyshev Approximation Method [1.775629639045375]
We present two new quantum algorithms for reaction-diffusion equations that employ a truncated Chebyshev polynomial approximation. We derive sufficient conditions for the diagonalization of the Carleman embedding matrix. The success of the diagonalization is based on a conjecture that a specific trigonometric equation has no integral solution.
arXiv Detail & Related papers (2025-10-21T19:14:23Z) - Projection by Convolution: Optimal Sample Complexity for Reinforcement Learning in Continuous-Space MDPs [56.237917407785545]
We consider the problem of learning an $\varepsilon$-optimal policy in a general class of continuous-space Markov decision processes (MDPs) having smooth Bellman operators.
Key to our solution is a novel projection technique based on ideas from harmonic analysis.
Our result bridges the gap between two popular but conflicting perspectives on continuous-space MDPs.
arXiv Detail & Related papers (2024-05-10T09:58:47Z) - Fast Minimization of Expected Logarithmic Loss via Stochastic Dual Averaging [8.990961435218544]
We propose a first-order algorithm named $B$-sample dual averaging with the logarithmic barrier.
For the Poisson inverse problem, our algorithm attains an $\varepsilon$ solution in $\tilde{O}(d^3/\varepsilon^2)$ time.
When computing the maximum-likelihood estimate for quantum state tomography, our algorithm yields an $\varepsilon$-optimal solution in $\tilde{O}(d^3/\varepsilon^2)$ time.
arXiv Detail & Related papers (2023-11-05T03:33:44Z) - Sublinear scaling in non-Markovian open quantum systems simulations [0.0]
We introduce a numerically exact algorithm to calculate process tensors.
Our approach requires only $\mathcal{O}(n\log n)$ singular value decompositions for environments with infinite memory.
arXiv Detail & Related papers (2023-04-11T15:40:33Z) - Second-order optimization with lazy Hessians [55.51077907483634]
We analyze Newton's method with lazy Hessian updates for solving general, possibly non-linear optimization problems.
We reuse a previously seen Hessian for several iterations while computing new gradients at each step of the method.
arXiv Detail & Related papers (2022-12-01T18:58:26Z) - Explicit Second-Order Min-Max Optimization: Practical Algorithms and Complexity Analysis [71.05708939639537]
We propose and analyze several inexact regularized Newton-type methods for finding a global saddle point of concave unconstrained problems. Our method improves the existing line-search-based min-max optimization by shaving off an $O(\log\log(1/\epsilon))$ factor in the required number of Schur decompositions.
arXiv Detail & Related papers (2022-10-23T21:24:37Z) - Mean-Square Analysis with An Application to Optimal Dimension Dependence of Langevin Monte Carlo [60.785586069299356]
This work provides a general framework for the non-asymptotic analysis of sampling error in 2-Wasserstein distance.
Our theoretical analysis is further validated by numerical experiments.
arXiv Detail & Related papers (2021-09-08T18:00:05Z) - Continuous Submodular Maximization: Beyond DR-Submodularity [48.04323002262095]
We first prove a guarantee for a simple variant of the vanilla coordinate ascent, called Coordinate-Ascent+.
We then propose Coordinate-Ascent++, which achieves a tight $(1-1/e-\varepsilon)$-approximation guarantee while performing the same number of iterations.
The computation of each round of Coordinate-Ascent++ can be easily parallelized so that the computational cost per machine scales as $O(n/\sqrt{\varepsilon}+n\log n)$.
arXiv Detail & Related papers (2020-06-21T06:57:59Z) - Second-order Conditional Gradient Sliding [70.88478428882871]
We present the Second-Order Conditional Gradient Sliding (SOCGS) algorithm. The SOCGS algorithm converges quadratically in primal gap after a finite number of linearly convergent iterations. It is useful when the feasible region can only be accessed efficiently through a linear optimization oracle.
arXiv Detail & Related papers (2020-02-20T17:52:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.