Large-scale Lindblad learning from time-series data
- URL: http://arxiv.org/abs/2512.08165v1
- Date: Tue, 09 Dec 2025 01:50:14 GMT
- Title: Large-scale Lindblad learning from time-series data
- Authors: Ewout van den Berg, Brad Mitchell, Ken Xuan Wei, Moein Malekakhlagh,
- Abstract summary: We develop a protocol for learning a time-independent Lindblad model for operations that can be applied repeatedly on a quantum computer. We demonstrate the approach by learning the Lindbladian for a full layer of gates on a 156-qubit superconducting quantum processor.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we develop a protocol for learning a time-independent Lindblad model for operations that can be applied repeatedly on a quantum computer. The protocol is highly scalable for models with local interactions and is in principle insensitive to state-preparation errors. At its core, the protocol forms a linear system of equations for the model parameters in terms of a set of observable values and their gradients. The required gradient information is obtained by fitting time-series data with sums of exponentially damped sinusoids and differentiating those curves. We develop a robust curve-fitting procedure that finds the most parsimonious representation of the data up to shot noise. We demonstrate the approach by learning the Lindbladian for a full layer of gates on a 156-qubit superconducting quantum processor, providing the first learning experiment of this kind. We study the effects of state-preparation and measurement errors and limitations on the operations that can be learned. For improved performance under readout errors, we propose an optional fine-tuning strategy that improves the fit between the time-evolved model and the measured data.
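The core fitting step (representing uniformly sampled time-series data as a sum of exponentially damped sinusoids and then differentiating the fitted curve) can be illustrated with Prony's method, a classical linear-algebraic technique for this model class. This is a minimal noiseless sketch, not the paper's robust curve-fitting procedure; the signal, sampling rate, and model order below are illustrative assumptions.

```python
import numpy as np

def prony_fit(y, p):
    """Fit y[k] ~ sum_i c_i * z_i**k with p complex exponentials (Prony's method).

    A damped sinusoid is a conjugate pair of complex exponentials, so this
    recovers amplitudes c_i and discrete-time poles z_i from uniform samples.
    """
    n = len(y)
    # Step 1: linear prediction y[k] = a_1*y[k-1] + ... + a_p*y[k-p]
    A = np.column_stack([y[p - j:n - j] for j in range(1, p + 1)])
    a, *_ = np.linalg.lstsq(A, y[p:], rcond=None)
    # Step 2: the poles are the roots of z**p - a_1*z**(p-1) - ... - a_p
    z = np.roots(np.concatenate(([1.0], -a)))
    # Step 3: amplitudes from a linear least-squares fit to the samples
    V = z[None, :] ** np.arange(n)[:, None]
    c, *_ = np.linalg.lstsq(V, y.astype(complex), rcond=None)
    return c, z

# One damped sinusoid y(t) = exp(-0.3 t) * cos(2 t), i.e. model order p = 2.
dt = 0.05
t = dt * np.arange(200)
y = np.exp(-0.3 * t) * np.cos(2.0 * t)

c, z = prony_fit(y, p=2)
s = np.log(z) / dt            # continuous-time poles: decay rate +/- i*frequency
deriv0 = (c * s).sum().real   # d/dt of sum_i c_i exp(s_i t) at t = 0; ~ -0.3 here
```

With noiseless data the true derivative y'(0) = -0.3 is recovered essentially exactly; with shot noise, a robust model-order selection step (as the paper describes) becomes the hard part.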
Related papers
- Can Small Training Runs Reliably Guide Data Curation? Rethinking Proxy-Model Practice [109.9635246405237]
We show that conclusions about data quality can flip with even minor adjustments to training hyperparameters. We introduce a simple patch to the evaluation protocol: using reduced learning rates for proxy-model training. Empirically, we validate this approach across 23 data recipes covering four critical dimensions of data curation.
arXiv Detail & Related papers (2025-12-30T23:02:44Z) - Nonparametric Data Attribution for Diffusion Models [57.820618036556084]
Data attribution for generative models seeks to quantify the influence of individual training examples on model outputs. We propose a nonparametric attribution method that operates entirely on data, measuring influence via patch-level similarity between generated and training images.
arXiv Detail & Related papers (2025-10-16T03:37:16Z) - Robust Lindbladian Estimation for Quantum Dynamics [0.0]
We revisit the problem of fitting Lindbladian models to the outputs of quantum process tomography. We introduce algorithmic improvements to logarithm search, demonstrating that it can be applied in practice to settings relevant for current quantum computing hardware. We additionally augment the task of Lindbladian fitting with techniques from gate set tomography to improve robustness against state-preparation and measurement errors.
arXiv Detail & Related papers (2025-07-10T16:45:37Z) - Episodic Gaussian Process-Based Learning Control with Vanishing Tracking Errors [10.627020714408445]
We develop an episodic approach for learning GP models, such that an arbitrary tracking accuracy can be guaranteed.
The effectiveness of the derived theory is demonstrated in several simulations.
arXiv Detail & Related papers (2023-07-10T08:43:28Z) - Value function estimation using conditional diffusion models for control [62.27184818047923]
We propose a simple algorithm called Diffused Value Function (DVF).
It learns a joint multi-step model of the environment-robot interaction dynamics using a diffusion model.
We show how DVF can be used to efficiently capture the state visitation measure for multiple controllers.
arXiv Detail & Related papers (2023-06-09T18:40:55Z) - Transfer learning of phase transitions in percolation and directed percolation [2.0342076109301583]
We apply a domain adversarial neural network (DANN) based on transfer learning to study non-equilibrium and equilibrium phase transition models.
The DANN learning of both models yields reliable results which are comparable to the ones from Monte Carlo simulations.
arXiv Detail & Related papers (2021-12-31T15:24:09Z) - Convolutional generative adversarial imputation networks for spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAN-based techniques have attracted attention as unsupervised machine learning methods. We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z) - Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the importance-guided stochastic gradient descent (IGSGD) method to train inference from inputs containing missing values without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z) - Adjusting for Autocorrelated Errors in Neural Networks for Time Series Regression and Forecasting [10.659189276058948]
We learn the autocorrelation coefficient jointly with the model parameters in order to adjust for autocorrelated errors.
For time series regression, large-scale experiments indicate that our method outperforms the Prais-Winsten method.
Results across a wide range of real-world datasets show that our method enhances performance in almost all cases.
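The core idea of learning the autocorrelation coefficient jointly with the model parameters can be sketched in a few lines of numpy: fit an AR(1) coefficient rho on the residuals by minimizing the Prais-Winsten-style transformed residual u_t = r_t - rho * r_{t-1}. This is a minimal sketch with a scalar linear model standing in for the paper's neural networks; all data and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression with AR(1) errors: e_t = rho * e_{t-1} + eps_t
n, w_true, rho_true = 500, 2.0, 0.8
x = rng.normal(size=n)
eps = 0.5 * rng.normal(size=n)
e = np.zeros(n)
for t in range(1, n):
    e[t] = rho_true * e[t - 1] + eps[t]
y = w_true * x + e

# Jointly learn the model weight w and the AR(1) coefficient rho by
# gradient descent on the transformed residual u_t = r_t - rho * r_{t-1},
# where r_t = y_t - w * x_t.
w, rho, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    r = y - w * x
    u = r[1:] - rho * r[:-1]
    grad_w = np.mean(-2.0 * u * (x[1:] - rho * x[:-1]))
    grad_rho = np.mean(-2.0 * u * r[:-1])
    w -= lr * grad_w
    rho -= lr * grad_rho
```

After training, w and rho land close to the generating values (2.0 and 0.8), whereas an ordinary least-squares loss would leave the error correlation unmodeled.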
arXiv Detail & Related papers (2021-01-28T04:25:51Z) - Computer Model Calibration with Time Series Data using Deep Learning and Quantile Regression [1.6758573326215689]
The existing standard calibration framework suffers from inferential issues when the model output and observational data are high-dimensional dependent data.
We propose a new calibration framework based on a deep neural network (DNN) with long short-term memory layers that directly emulates the inverse relationship between the model output and input parameters.
arXiv Detail & Related papers (2020-08-29T22:18:41Z) - Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift [81.74795324629712]
We evaluate prediction-time batch normalization, which significantly improves model accuracy and calibration under covariate shift.
We show that prediction-time batch normalization provides complementary benefits to existing state-of-the-art approaches for improving robustness.
The method has mixed results when used alongside pre-training, and does not seem to perform as well under more natural types of dataset shift.
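The mechanism is simple: instead of normalizing test inputs with the running statistics accumulated during training, recompute the mean and variance from the prediction batch itself. A minimal numpy sketch of one normalization layer (the shift magnitude and shapes below are illustrative assumptions, not the paper's experimental setup):

```python
import numpy as np

def batchnorm(x, mean, var, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize x per feature with the supplied statistics."""
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(1000, 4))
run_mean, run_var = train.mean(axis=0), train.var(axis=0)  # "running" stats

# Covariate shift at test time: the input distribution's mean moves to 3.
test = rng.normal(3.0, 1.0, size=(256, 4))

out_eval = batchnorm(test, run_mean, run_var)              # standard eval-mode BN
out_pred = batchnorm(test, test.mean(axis=0), test.var(axis=0))  # prediction-time BN
```

Under the shift, standard eval-mode normalization passes the offset straight through (`out_eval` has mean near 3), while prediction-time statistics re-center the batch (`out_pred` has mean near 0), which is the effect the paper studies.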
arXiv Detail & Related papers (2020-06-19T05:08:43Z) - Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in the video sequence.
This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time.
Our results achieve state-of-the-art performance in a wide range of applications and datasets.
arXiv Detail & Related papers (2020-02-21T05:00:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.