Robust Lindbladian Estimation for Quantum Dynamics
- URL: http://arxiv.org/abs/2507.07912v1
- Date: Thu, 10 Jul 2025 16:45:37 GMT
- Title: Robust Lindbladian Estimation for Quantum Dynamics
- Authors: Yinchen Liu, James R. Seddon, Tamara Kohler, Emilio Onorati, Toby S. Cubitt
- Abstract summary: We revisit the problem of fitting Lindbladian models to the outputs of quantum process tomography. We introduce algorithmic improvements to logarithm search, demonstrating that it can be applied in practice to settings relevant for current quantum computing hardware. We additionally augment the task of Lindbladian fitting with techniques from gate set tomography to improve robustness against state preparation and measurement errors.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We revisit the problem of fitting Lindbladian models to the outputs of quantum process tomography. A sequence of prior theoretical works approached the problem by considering whether there exists a Lindbladian generator close to a matrix logarithm of the tomographically estimated transfer matrix. This technique must take into account the non-uniqueness of the matrix logarithm, so that in general multiple branches of the logarithm must be checked. In contrast, all practical demonstrations of Lindbladian fitting on real experimental data have to our knowledge eschewed logarithm search, instead adopting direct numerical optimisation or ad-hoc approaches tailored to a particular experimental realisation. In our work, we introduce algorithmic improvements to logarithm search, demonstrating that it can be applied in practice to settings relevant for current quantum computing hardware. We additionally augment the task of Lindbladian fitting with techniques from gate set tomography to improve robustness against state preparation and measurement (SPAM) errors, which can otherwise obfuscate estimates of the model underlying the process of interest. We benchmark our techniques extensively using simulated tomographic data employing a range of realistic error models, before demonstrating their application to tomographic data collected from real superconducting-qubit hardware.
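The non-uniqueness of the matrix logarithm mentioned in the abstract can be illustrated with a short sketch. This is a minimal, generic illustration and not the paper's algorithm: it assumes a diagonalizable transfer matrix and, for brevity, shifts all eigenvalue logarithms by a single common branch index `k`, whereas a full logarithm search assigns an independent branch index to each eigenvalue. The function name `logarithm_branches` is hypothetical.

```python
import numpy as np
from scipy.linalg import eig

def logarithm_branches(T, k_max=2):
    """Enumerate candidate generators L with exp(L) = T by shifting the
    eigenvalue logarithms across branches (illustrative sketch only)."""
    w, V = eig(T)                      # T = V diag(w) V^{-1}
    Vinv = np.linalg.inv(V)
    candidates = []
    # Shift the principal logarithm of every eigenvalue by 2*pi*i*k.
    # A full search would use an independent branch index per eigenvalue.
    for k in range(-k_max, k_max + 1):
        logw = np.log(w.astype(complex)) + 2j * np.pi * k
        candidates.append(V @ np.diag(logw) @ Vinv)
    return candidates
```

Every candidate exponentiates back to the same transfer matrix, which is why each branch must be checked separately for Lindbladian structure.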
Related papers
- Scalable Gaussian process modeling of parametrized spatio-temporal fields [2.005299372367689]
We develop a scalable framework for learning parametrized equations over fixed or parametrized spatio-temporal domains. A key feature of our approach is the efficient computation of the posterior variance at essentially the same computational cost as the posterior mean. Results establish the proposed framework as an effective tool for data-driven surrogate modeling, particularly when uncertainty estimates are required for downstream tasks.
arXiv Detail & Related papers (2026-02-27T20:16:21Z) - Large-scale Lindblad learning from time-series data [0.1749935196721634]
We develop a protocol for learning a time-independent Lindblad model for operations that can be applied repeatedly on a quantum computer. We demonstrate the approach by learning the Lindbladian for a full layer of gates on a 156-qubit superconducting quantum processor.
arXiv Detail & Related papers (2025-12-09T01:50:14Z) - Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
arXiv Detail & Related papers (2024-11-21T10:26:17Z) - Efficient Prior Calibration From Indirect Data [5.588334720483076]
This paper is concerned with learning the prior model from data, in particular, learning the prior from multiple realizations of indirect data obtained through the noisy observation process. An efficient residual-based neural operator approximation of the forward model is proposed and it is shown that this may be learned concurrently with the pushforward map.
arXiv Detail & Related papers (2024-05-28T08:34:41Z) - Conditionally-Conjugate Gaussian Process Factor Analysis for Spike Count Data via Data Augmentation [8.114880112033644]
Recently, GPFA has been extended to model spike count data.
We propose a conditionally-conjugate Gaussian process factor analysis (ccGPFA) resulting in both analytically and computationally tractable inference.
arXiv Detail & Related papers (2024-05-19T21:53:36Z) - Fusion of Gaussian Processes Predictions with Monte Carlo Sampling [61.31380086717422]
In science and engineering, we often work with models designed for accurate prediction of variables of interest.
Recognizing that these models are approximations of reality, it becomes desirable to apply multiple models to the same data and integrate their outcomes.
arXiv Detail & Related papers (2024-03-03T04:21:21Z) - Equation Discovery with Bayesian Spike-and-Slab Priors and Efficient Kernels [57.46832672991433]
We propose a novel equation discovery method based on Kernel learning and BAyesian Spike-and-Slab priors (KBASS).
We use kernel regression to estimate the target function, which is flexible, expressive, and more robust to data sparsity and noises.
We develop an expectation-propagation expectation-maximization algorithm for efficient posterior inference and function estimation.
arXiv Detail & Related papers (2023-10-09T03:55:09Z) - A Fast and Stable Marginal-Likelihood Calibration Method with Application to Quantum Characterization [0.0]
We present a marginal likelihood strategy integrated into the Kennedy-O'Hagan (KOH) Bayesian framework. The proposed method is both computationally efficient and numerically stable, even in large dataset regimes.
arXiv Detail & Related papers (2023-08-24T04:39:18Z) - Probabilistic Unrolling: Scalable, Inverse-Free Maximum Likelihood Estimation for Latent Gaussian Models [69.22568644711113]
We introduce probabilistic unrolling, a method that combines Monte Carlo sampling with iterative linear solvers to circumvent matrix inversions.
Our theoretical analyses reveal that unrolling and backpropagation through the iterations of the solver can accelerate gradient estimation for maximum likelihood estimation.
In experiments on simulated and real data, we demonstrate that probabilistic unrolling learns latent Gaussian models up to an order of magnitude faster than gradient EM, with minimal losses in model performance.
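The inverse-free idea behind this entry can be sketched with a plain conjugate-gradient solver, which replaces an explicit matrix inverse with repeated matrix-vector products. This is a generic textbook sketch under the assumption of a symmetric positive-definite system, not the paper's implementation.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A using only
    matrix-vector products, never forming A^{-1} (illustrative sketch)."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x                      # initial residual
    p = r.copy()                       # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)          # optimal step along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p      # conjugate update of direction
        rs = rs_new
    return x
```

Because only `A @ v` products are needed, the same loop works when `A` is available only implicitly, which is what makes unrolling through such a solver attractive.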
arXiv Detail & Related papers (2023-06-05T21:08:34Z) - Score-based Diffusion Models in Function Space [137.70916238028306]
Diffusion models have recently emerged as a powerful framework for generative modeling. This work introduces a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space. We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z) - Improved Accuracy for Trotter Simulations Using Chebyshev Interpolation [0.5729426778193399]
We show how errors due to Trotterized time evolution can be mitigated through the use of algorithmic techniques.
Our approach is to extrapolate to zero Trotter step size, akin to zero-noise extrapolation techniques for mitigating hardware errors.
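The extrapolation to zero Trotter step size can be sketched with a simple polynomial fit in the step size, standing in for the Chebyshev interpolation the paper actually employs. This is an illustrative sketch only; the step sizes, values, and polynomial order below are hypothetical.

```python
import numpy as np

def extrapolate_to_zero(step_sizes, values, order=2):
    """Fit a degree-`order` polynomial in the Trotter step size to the
    observed values and evaluate it at step size zero (sketch only)."""
    coeffs = np.polyfit(step_sizes, values, deg=order)
    return np.polyval(coeffs, 0.0)
```

With observations at a few finite step sizes, the fitted value at zero estimates the error-free limit, much as zero-noise extrapolation estimates the noiseless expectation value.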
arXiv Detail & Related papers (2022-12-29T01:21:26Z) - Rigorous dynamical mean field theory for stochastic gradient descent methods [17.90683687731009]
We prove closed-form equations for the exact high-dimensional asymptotics of a family of first-order gradient-based methods.
This includes widely used algorithms such as stochastic gradient descent (SGD) or Nesterov acceleration.
arXiv Detail & Related papers (2022-10-12T21:10:55Z) - Evaluating State-of-the-Art Classification Models Against Bayes Optimality [106.50867011164584]
We show that we can compute the exact Bayes error of generative models learned using normalizing flows.
We use our approach to conduct a thorough investigation of state-of-the-art classification models.
arXiv Detail & Related papers (2021-06-07T06:21:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.