Dynamically Optimal Unraveling Schemes for Simulating Lindblad Equations
- URL: http://arxiv.org/abs/2509.19887v1
- Date: Wed, 24 Sep 2025 08:36:47 GMT
- Title: Dynamically Optimal Unraveling Schemes for Simulating Lindblad Equations
- Authors: Yu Cao, Mingfeng He, Xiantao Li
- Abstract summary: We present a comprehensive parametric characterization of unraveling schemes driven by Brownian motion or Poisson processes. We analytically derive the dynamically optimal quantum state diffusion (DO-QSD) and dynamically optimal quantum jump process (DO-QJP) schemes. Results demonstrate that the proposed DO-QSD scheme may achieve substantial reductions in the variance of observables and the resulting simulation error.
- Score: 5.556367784464714
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Stochastic unraveling schemes are powerful computational tools for simulating Lindblad equations, offering significant reductions in memory requirements. However, this advantage is accompanied by increased stochastic uncertainty, and the question of optimal unraveling remains open. In this work, we investigate unraveling schemes driven by Brownian motion or Poisson processes and present a comprehensive parametric characterization of these approaches. For the case of a single Lindblad operator and one noise term, this parametric family provides a complete description of unraveling schemes with pathwise norm preservation. We further analytically derive the dynamically optimal quantum state diffusion (DO-QSD) and dynamically optimal quantum jump process (DO-QJP) schemes that minimize the short-time growth of the variance of an observable. Compared to the jump-process ansatz, DO-QSD offers two notable advantages: first, the variance of DO-QSD can be rigorously shown not to exceed that of any jump-process ansatz locally in time; second, it admits very simple expressions. Numerical results demonstrate that the proposed DO-QSD scheme may achieve substantial reductions in the variance of observables and the resulting simulation error.
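To make the setting concrete, here is a minimal sketch of a generic (not dynamically optimal) quantum state diffusion unraveling for a single Lindblad operator, integrated with Euler-Maruyama and averaged over trajectories. The function name, the damped-qubit example, and all parameters are illustrative assumptions, not taken from the paper; the paper's DO-QSD scheme instead chooses the unraveling parameters to minimize the short-time variance growth.

```python
# Minimal sketch: linear quantum-state-diffusion (QSD) unraveling of a Lindblad
# equation with one jump operator L. For the linear SDE
#   d|psi> = (-iH - L^dag L / 2)|psi> dt + L|psi> dW,
# E[|psi><psi|] solves the Lindblad equation, so averaging <psi|O|psi> over
# trajectories estimates tr(O rho_t). Illustrative code, not the paper's DO-QSD.
import numpy as np

def qsd_trajectories(H, L, psi0, T, dt, n_traj, seed=0):
    rng = np.random.default_rng(seed)
    drift = -1j * H - 0.5 * (L.conj().T @ L)                   # deterministic drift
    psis = np.tile(psi0, (n_traj, 1)).astype(complex)          # one row per trajectory
    for _ in range(int(T / dt)):
        dW = rng.normal(0.0, np.sqrt(dt), size=(n_traj, 1))    # real Brownian increments
        psis = psis + dt * psis @ drift.T + dW * (psis @ L.T)  # Euler-Maruyama step
    return psis

# Damped qubit (assumed example): H = sigma_z, decay operator L = sqrt(gamma) sigma_-
H = np.array([[1.0, 0.0], [0.0, -1.0]])
L = np.sqrt(0.3) * np.array([[0.0, 1.0], [0.0, 0.0]])
psi0 = np.array([0.0, 1.0], dtype=complex)                     # start in the excited state
psis = qsd_trajectories(H, L, psi0, T=1.0, dt=1e-3, n_traj=2000)

O = np.diag([0.0, 1.0])                                        # excited-state population
samples = np.einsum('ni,ij,nj->n', psis.conj(), O, psis).real
print(f"tr(O rho_T) ~ {samples.mean():.4f} +/- {samples.std(ddof=1) / np.sqrt(len(samples)):.4f}")
```

The spread of the per-trajectory samples is precisely the stochastic uncertainty the abstract refers to; DO-QSD is designed to shrink it without changing the mean.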
Related papers
- Efficient quantum machine learning with inverse-probability algebraic corrections [2.7412662946127764]
Quantum neural networks (QNNs) provide expressive probabilistic models by leveraging quantum superposition and entanglement. Existing training approaches largely rely on gradient-based procedural optimization.
arXiv Detail & Related papers (2026-01-23T11:28:53Z) - Stochastic Quantum Hamiltonian Descent [5.8172845753874896]
We introduce Stochastic Quantum Hamiltonian Descent (SQHD), a quantum optimization algorithm that integrates the computational efficiency of stochastic methods with the global exploration power of quantum dynamics. We also propose a discrete-time gate-based scheme that approximates the continuous dynamics without direct Lindbladian simulation, enabling these algorithms on near-term quantum devices.
arXiv Detail & Related papers (2025-07-21T09:24:49Z) - MPQ-DMv2: Flexible Residual Mixed Precision Quantization for Low-Bit Diffusion Models with Temporal Distillation [74.34220141721231]
We present MPQ-DMv2, an improved Mixed Precision Quantization framework for extremely low-bit Diffusion Models.
arXiv Detail & Related papers (2025-07-06T08:16:50Z) - Zassenhaus Expansion in Solving the Schrödinger Equation [0.0]
A fundamental challenge lies in approximating the unitary evolution operator $e^{-i\mathcal{H}t}$, where $\mathcal{H}$ is a large, typically non-commuting, Hermitian operator. We present a refinement of the fixed-depth simulation framework introduced by E. Kökcü et al., incorporating the second-order Zassenhaus expansion. This yields a controlled, non-unitary approximation with error scaling as $\mathcal{O}(t^{3})$.
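For context, the standard second-order Zassenhaus identity that such a refinement builds on factors the evolution into elementary exponentials with an explicit commutator correction; this is the textbook form, not necessarily the exact expression used in the paper:

```latex
% Second-order Zassenhaus expansion for a split Hamiltonian H = X + Y;
% truncating after the commutator factor leaves a local error of O(t^3):
e^{-it(X+Y)} \;=\; e^{-itX}\, e^{-itY}\, e^{\frac{t^{2}}{2}[X,Y]} \;+\; \mathcal{O}(t^{3})
```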
arXiv Detail & Related papers (2025-05-14T14:48:47Z) - Stochastic Optimization with Optimal Importance Sampling [49.484190237840714]
We propose an iterative algorithm that jointly updates the decision variable and the importance-sampling (IS) distribution, without requiring time-scale separation between the two. Our method achieves the lowest possible variance and guarantees global convergence under convexity of the objective and mild assumptions on the IS distribution family.
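As a toy illustration of the single-time-scale idea, the sketch below takes one stochastic-gradient step on the decision variable and, from the same sample, one descent step on the second moment of the IS-weighted gradient with respect to the proposal mean. The objective, proposal family, and step sizes are all assumptions for illustration; the paper's actual update rules and guarantees are not reproduced.

```python
# Joint decision/IS update on a toy problem: minimize E[f(x; z)] for z ~ N(0, 1),
# where f(x; z) = 0.5 * (x - 1)^2 * 1{z > 2}. The gradient is only "active" on
# the rare event {z > 2}, so a good IS proposal q = N(mu, 1) should shift toward it.
import numpy as np

rng = np.random.default_rng(1)
grad_f = lambda x, z: (x - 1.0) * (z > 2.0)

x, mu = 0.0, 0.0
alpha, beta = 0.5, 0.05                      # step sizes, same time scale

for _ in range(20000):
    z = rng.normal(mu, 1.0)                  # sample from the current proposal
    w = np.exp(0.5 * mu**2 - mu * z)         # likelihood ratio N(0,1)/N(mu,1)
    g = w * grad_f(x, z)                     # unbiased IS gradient estimate
    x -= alpha * g                           # decision update
    score = z - mu                           # d/dmu log q(z; mu)
    mu += beta * g**2 * score                # descend E_q[(w grad_f)^2] over mu

print(f"x ~ {x:.3f} (optimum 1.0), learned proposal mean mu ~ {mu:.2f}")
```

The mu update follows from $\nabla_\mu \mathbb{E}_q[(w\,\nabla f)^2] = -\mathbb{E}_q[(w\,\nabla f)^2\,\nabla_\mu \log q]$, so both variables descend from the same sample stream.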
arXiv Detail & Related papers (2025-04-04T16:10:18Z) - Towards robust variational quantum simulation of Lindblad dynamics via stochastic Magnus expansion [8.699304343980115]
We introduce a novel and general framework for the variational quantum simulation of Lindblad equations. We demonstrate the effectiveness of our algorithm through numerical examples in both classical and quantum implementations.
arXiv Detail & Related papers (2025-03-28T02:37:56Z) - A quantum algorithm to simulate Lindblad master equations [1.104960878651584]
We present a quantum algorithm for simulating a family of Markovian master equations.
Our approach employs a second-order product formula for the Lindblad master equation.
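For reference, a standard second-order (Strang-type) product formula for a Lindbladian split into a Hamiltonian part and a dissipative part reads as follows; the exact splitting used in the paper may differ:

```latex
% Strang splitting of the Lindbladian L = L_H + L_D, where
% L_H(rho) = -i[H, rho] and L_D collects the dissipators; local error O(t^3):
e^{t\mathcal{L}} \;=\; e^{\frac{t}{2}\mathcal{L}_H}\, e^{t\mathcal{L}_D}\, e^{\frac{t}{2}\mathcal{L}_H} \;+\; \mathcal{O}(t^{3})
```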
arXiv Detail & Related papers (2024-06-18T16:08:11Z) - Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed by formulating the objective as the logistic loss of the real data and the artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
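For reference, the logistic loss mentioned in the summary is, in its standard NCE form, the following objective (with unnormalized model $p_\theta$, noise distribution $q$, and $\nu$ noise samples per data point; this is the textbook form, not a detail taken from the paper):

```latex
% Standard NCE objective: logistic discrimination between data and noise,
% with sigma the logistic sigmoid.
J(\theta) \;=\; \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log \sigma\!\left(\log p_\theta(x) - \log \nu q(x)\right)\right]
          \;+\; \nu\, \mathbb{E}_{x \sim q}\!\left[\log\!\left(1 - \sigma\!\left(\log p_\theta(x) - \log \nu q(x)\right)\right)\right]
```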
arXiv Detail & Related papers (2023-06-13T01:18:16Z) - Accurate methods for the analysis of strong-drive effects in parametric gates [94.70553167084388]
We show how to efficiently extract gate parameters using exact numerics and a perturbative analytical approach.
We identify optimal regimes of operation for different types of gates including $i$SWAP, controlled-Z, and CNOT.
arXiv Detail & Related papers (2021-07-06T02:02:54Z) - High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide a small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds that depend on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
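This entry and the clipped-SSTM paper below both build on the same primitive, norm-clipped stochastic gradients, sketched here under assumed toy settings (the quadratic objective, step size, and clipping level are illustrative; neither paper's stepsize rules nor the acceleration in clipped-SSTM are reproduced):

```python
# Norm-clipped SGD: cap the gradient norm at lam, which tames heavy-tailed
# noise at the price of a small bias.
import numpy as np

def clip(g, lam):
    """Rescale g so its Euclidean norm never exceeds lam."""
    n = np.linalg.norm(g)
    return g if n <= lam else (lam / n) * g

rng = np.random.default_rng(0)
x = np.array([5.0, -3.0])                         # iterate for f(x) = 0.5 * ||x||^2
for _ in range(2000):
    noise = rng.standard_t(df=2.1, size=2)        # heavy-tailed gradient noise
    x = x - 0.05 * clip(x + noise, lam=1.0)       # clipped SGD step
print("final iterate:", x)                        # approaches the origin despite heavy tails
```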
arXiv Detail & Related papers (2021-06-10T17:54:21Z) - Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient Clipping [69.9674326582747]
We propose a new accelerated first-order method, clipped-SSTM, for smooth convex optimization with heavy-tailed noise in the stochastic gradients.
We prove new complexity bounds that outperform state-of-the-art results in this case.
We derive the first non-trivial high-probability complexity bounds for SGD with clipping without light-tails assumption on the noise.
arXiv Detail & Related papers (2020-05-21T17:05:27Z) - Amortized variance reduction for doubly stochastic objectives [17.064916635597417]
Approximate inference in complex probabilistic models requires the optimisation of doubly stochastic objective functions.
Current approaches do not take into account how mini-batch stochasticity interacts with sampling stochasticity, resulting in sub-optimal variance reduction.
We propose a new approach in which we use a recognition network to cheaply approximate the optimal control variate for each mini-batch, with no additional gradient computations.
arXiv Detail & Related papers (2020-03-09T13:23:14Z)
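A minimal sketch of the underlying control-variate mechanics, assuming a scalar coefficient fitted from samples rather than the paper's recognition network (which amortizes this choice per mini-batch with no extra gradient computations):

```python
# Control-variate variance reduction: subtract a correlated, known-mean quantity
# h from a noisy estimator g; the optimal scalar coefficient is Cov(g, h) / Var(h).
# Toy setting (assumed): estimate E[exp(0.1 Z)] for Z ~ N(0, 1) using h = Z (mean 0).
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=(50, 2000))                    # 50 "mini-batches" of 2000 samples
g = np.exp(0.1 * z)                                # per-sample estimator of E[exp(0.1 Z)]
h = z                                              # control variate with known mean 0

c = np.cov(g.ravel(), h.ravel())[0, 1] / h.var()   # fitted optimal coefficient
plain = g.mean(axis=1)                             # plain per-batch estimates
controlled = (g - c * h).mean(axis=1)              # controlled per-batch estimates
print("variance ratio (controlled/plain):", controlled.var() / plain.var())  # << 1
```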
This list is automatically generated from the titles and abstracts of the papers on this site.
The quality of the information on this site is not guaranteed, and the site accepts no responsibility for any consequences of its use.