Differentiable quantum-trajectory simulation of Lindblad dynamics for QGP transport-coefficient inference
- URL: http://arxiv.org/abs/2601.14399v1
- Date: Tue, 20 Jan 2026 19:03:37 GMT
- Title: Differentiable quantum-trajectory simulation of Lindblad dynamics for QGP transport-coefficient inference
- Authors: Lukas Heinrich, Tom Magorsch
- Abstract summary: We study parameter estimation for the transport coefficients of the quark-gluon plasma by differentiating open-quantum-system-based Monte Carlo simulations of quarkonium suppression. The underlying simulator requires solving a Lindblad equation in a large Hilbert space, which makes parameter estimation computationally expensive. We apply the score-function gradient estimator to differentiate through discrete jump sampling in the Monte Carlo wave-function algorithm used to solve the Lindblad equation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study parameter estimation for the transport coefficients of the quark-gluon plasma by differentiating open-quantum-system-based Monte Carlo simulations of quarkonium suppression. The underlying simulator requires solving a Lindblad equation in a large Hilbert space, which makes parameter estimation computationally expensive. We approach the problem using gradient-based optimization. Specifically, we apply the score-function gradient estimator to differentiate through discrete jump sampling in the Monte Carlo wave-function algorithm used to solve the Lindblad equation. The resulting stochastic gradient estimator exhibits sufficiently low variance and can still be estimated in an embarrassingly parallel manner, enabling efficient scaling of the simulations. We implement this gradient estimator in the existing open-source quarkonium suppression code QTraj. To demonstrate its utility for parameter estimation, we infer the two transport coefficients $\hat{\kappa}$ and $\hat{\gamma}$ using gradient-based optimization on synthetic nuclear modification factor data.
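As a concrete illustration of differentiating through the discrete jump decisions, the sketch below is a minimal numpy toy, not the QTraj implementation: a two-level system with $H = 0$ and a single decay channel $L = \sqrt{\gamma}\,\sigma_-$, unravelled by the Monte Carlo wave-function method, with the number of jumps per trajectory as the observable. In this special case the state between jumps does not depend on $\gamma$, so the explicit score below is the full score; in the general setting the paper targets, the parameter dependence of the no-jump evolution must be differentiated as well (e.g., with automatic differentiation).

```python
# Minimal toy sketch (not the QTraj implementation): score-function (REINFORCE)
# gradient through MCWF jump sampling for a two-level system with H = 0 and
# one decay channel L = sqrt(gamma) * sigma_minus.
import numpy as np

rng = np.random.default_rng(0)
sm = np.array([[0, 0], [1, 0]], dtype=complex)    # sigma_minus: |e> -> |g>
n_op = sm.conj().T @ sm                           # sigma_+ sigma_-

def trajectory(gamma, dt=0.01, n_steps=200):
    """One unravelled trajectory: returns the observable f (number of jumps)
    and the accumulated score sum_t d/dgamma log p(outcome at step t)."""
    psi = np.array([1.0, 0.0], dtype=complex)     # start in the excited state
    score, n_jumps = 0.0, 0
    for _ in range(n_steps):
        pop = float(np.real(psi.conj() @ n_op @ psi))
        p_jump = gamma * dt * pop                 # first-order jump probability
        if rng.random() < p_jump:
            score += 1.0 / gamma                  # d/dgamma log(gamma*dt*pop)
            psi = sm @ psi                        # apply the jump operator
            n_jumps += 1
        else:
            score += -dt * pop / (1.0 - p_jump)   # d/dgamma log(1 - p_jump)
            psi = psi - 0.5 * gamma * dt * (n_op @ psi)  # no-jump H_eff drift
        psi = psi / np.linalg.norm(psi)
    return float(n_jumps), score

def grad_estimate(gamma, n_traj=4000):
    """Score-function estimate of d/dgamma E[f]. Trajectories are independent,
    so the estimate is embarrassingly parallel. The sample-mean baseline
    reduces variance (a fixed constant baseline keeps it exactly unbiased)."""
    fs, scores = np.array([trajectory(gamma) for _ in range(n_traj)]).T
    return float(np.mean((fs - fs.mean()) * scores))

# Analytic value for this toy: n_steps*dt*(1 - gamma*dt)**(n_steps-1) ~ 0.271.
print(grad_estimate(gamma=1.0))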
Related papers
- Variational Entropic Optimal Transport [67.76725267984578]
We propose Variational Entropic Optimal Transport (VarEOT) for domain translation problems. VarEOT is based on an exact variational reformulation of the log-partition term $\log \mathbb{E}[\exp(\cdot)]$ as a tractable maximization over an auxiliary positive normalizer. Experiments on synthetic data and unpaired image-to-image translation demonstrate competitive or improved translation quality.
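For orientation, a standard exact variational reformulation of a log-partition term of this kind is the Donsker-Varadhan (Gibbs) formula; the paper's specific reformulation over an auxiliary positive normalizer may differ in form:

$$\log \mathbb{E}_{x \sim p}\!\left[e^{g(x)}\right] \;=\; \sup_{q \ll p} \Big\{ \mathbb{E}_{x \sim q}[g(x)] - \mathrm{KL}(q \,\Vert\, p) \Big\}.$$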
arXiv Detail & Related papers (2026-02-02T15:48:44Z)
- Neural Optimal Transport Meets Multivariate Conformal Prediction [58.43397908730771]
We propose a framework for conditional vector quantile regression (CVQR). CVQR combines neural optimal transport with vector quantile regression and applies it to multivariate conformal prediction.
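For context, vector quantile regression characterizes the conditional vector quantile function as a Brenier-type optimal transport map: the gradient in $u$ of a potential $\varphi(u, x)$, convex in $u$, pushing a fixed reference distribution forward to the conditional law (Carlier, Chernozhukov, Galichon). The paper's neural parametrization is not reproduced here:

$$Q(u \mid x) = \nabla_u \varphi(u, x), \qquad Q(U \mid x) \sim \mathrm{Law}(Y \mid X = x), \quad U \sim \mathrm{Unif}([0,1]^d).$$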
arXiv Detail & Related papers (2025-09-29T19:50:19Z)
- Semi-Implicit Functional Gradient Flow for Efficient Sampling [30.32233517392456]
We propose a functional gradient ParVI method that uses perturbed particles with Gaussian noise as the approximation family. We show that the corresponding functional gradient flow, which can be estimated via denoising score matching with neural networks, exhibits strong theoretical convergence guarantees. In addition, we present an adaptive version of our method that automatically selects the appropriate noise magnitude during sampling.
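The denoising score matching step mentioned above can be shown in isolation. A minimal sketch, assuming a 1-D linear score model in place of the paper's neural network: the DSM minimizer is the score of the $\sigma$-smoothed density, which for $N(0,1)$ data is $s(x) = -x/(1+\sigma^2)$.

```python
# Minimal sketch (assumption: 1-D linear score model instead of the paper's
# neural network). The DSM minimizer is the score of the sigma-smoothed
# density; for N(0, 1) data that is s(x) = -x / (1 + sigma^2).
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(20000)             # data from N(0, 1)
sigma = 0.5
eps = rng.standard_normal(x.shape)
x_noisy = x + sigma * eps
target = -eps / sigma                      # DSM regression target at x_noisy

# Fit s(x) = a*x + b by least squares on E||s(x + sigma*eps) + eps/sigma||^2.
A = np.stack([x_noisy, np.ones_like(x_noisy)], axis=1)
(a, b), *_ = np.linalg.lstsq(A, target, rcond=None)
print(a, -1.0 / (1.0 + sigma**2))          # a approaches -0.8
```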
arXiv Detail & Related papers (2024-10-23T15:00:30Z)
- Maximum a Posteriori Estimation for Linear Structural Dynamics Models Using Bayesian Optimization with Rational Polynomial Chaos Expansions [0.01578888899297715]
We propose an extension to an existing sparse Bayesian learning approach for MAP estimation.
We introduce a Bayesian optimization approach, which allows the experimental design to be adaptively enriched.
By combining the sparsity-inducing learning procedure with the experimental design, we effectively reduce the number of model evaluations.
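A minimal sketch of the adaptive-enrichment idea, assuming a generic GP-based Bayesian optimization loop on a 1-D toy objective (the paper's rational polynomial chaos surrogate and MAP target are not reproduced):

```python
# Minimal sketch (assumptions: 1-D toy objective, GP surrogate with an RBF
# kernel, lower-confidence-bound acquisition).
import numpy as np

rng = np.random.default_rng(2)

def objective(x):                          # stand-in for an expensive model
    return np.sin(3 * x) + 0.5 * x ** 2

def rbf(a, b, ls=0.3):
    a, b = np.asarray(a), np.asarray(b)
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

grid = np.linspace(-2, 2, 201)
X = list(rng.uniform(-2, 2, size=3))       # small initial design
for _ in range(15):
    Xa = np.array(X)
    ya = objective(Xa)
    K_inv = np.linalg.inv(rbf(Xa, Xa) + 1e-4 * np.eye(len(X)))
    ks = rbf(grid, Xa)
    mu = ks @ K_inv @ ya                   # GP posterior mean on the grid
    var = 1.0 - np.einsum('ij,jk,ik->i', ks, K_inv, ks)
    lcb = mu - 2.0 * np.sqrt(np.clip(var, 0.0, None))
    X.append(float(grid[np.argmin(lcb)]))  # enrich the design adaptively

print(min(X, key=objective))               # close to the global minimizer
```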
arXiv Detail & Related papers (2024-08-07T06:11:37Z)
- Neural Surrogate HMC: Accelerated Hamiltonian Monte Carlo with a Neural Network Surrogate Likelihood [0.0]
We show that some problems can be made tractable by amortizing the computation with a surrogate likelihood function implemented by a neural network.
We show that this has two additional benefits: reducing noise in the likelihood evaluations and providing fast gradient calculations.
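A minimal sketch of the surrogate idea, with a degree-2 polynomial standing in for the paper's neural network (an assumption for brevity): the expensive log-likelihood is evaluated only to fit the surrogate, and HMC's leapfrog integrator then uses the surrogate's cheap, noise-free gradients.

```python
# Minimal sketch (assumptions: 1-D toy posterior; a polynomial surrogate
# stands in for the paper's neural network).
import numpy as np

rng = np.random.default_rng(3)

def expensive_loglik(theta):               # stand-in for a costly simulator
    return -0.5 * (theta - 1.0) ** 2

# Fit the surrogate once on a modest number of expensive evaluations.
design = np.linspace(-3.0, 5.0, 25)
surrogate = np.poly1d(np.polyfit(design, expensive_loglik(design), deg=2))
surrogate_grad = surrogate.deriv()         # cheap, noise-free gradients

def hmc_step(theta, step=0.1, n_leapfrog=20):
    p = rng.standard_normal()
    th, h0 = theta, 0.5 * p ** 2 - surrogate(theta)
    for _ in range(n_leapfrog):            # leapfrog with surrogate gradients
        p += 0.5 * step * surrogate_grad(th)
        th += step * p
        p += 0.5 * step * surrogate_grad(th)
    h1 = 0.5 * p ** 2 - surrogate(th)
    return th if np.log(rng.random()) < h0 - h1 else theta

theta, samples = 0.0, []
for _ in range(2000):
    theta = hmc_step(theta)
    samples.append(theta)
print(np.mean(samples), np.std(samples))   # both close to 1.0 for N(1, 1)
```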
arXiv Detail & Related papers (2024-07-29T21:54:57Z)
- Leveraging Nested MLMC for Sequential Neural Posterior Estimation with Intractable Likelihoods [0.38233569758620045]
Methods of this kind aim to learn the posterior from adaptively proposed simulations using neural network-based conditional density estimators. The automatic posterior transformation (APT) method proposed by Greenberg et al. performs well and scales to high-dimensional data. In this paper, we reformulate APT as a nested estimation problem. We construct several multilevel Monte Carlo (MLMC) estimators for the loss function and its gradients to accommodate different scenarios.
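For orientation, the telescoping identity that multilevel Monte Carlo estimators build on (the paper's nested construction is more specific): with $P_\ell$ an approximation of the quantity of interest at level $\ell$,

$$\mathbb{E}[P_L] \;=\; \mathbb{E}[P_0] + \sum_{\ell=1}^{L} \mathbb{E}\left[P_\ell - P_{\ell-1}\right],$$

where each correction term is estimated independently, so most samples can be drawn at the cheap coarse levels.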
arXiv Detail & Related papers (2024-01-30T06:29:41Z)
- Unbiased Kinetic Langevin Monte Carlo with Inexact Gradients [0.8749675983608172]
We present an unbiased method for posterior means based on kinetic Langevin dynamics.
Our proposed estimator is unbiased, attains finite variance, and satisfies a central limit theorem.
Our results demonstrate that in large-scale applications, the unbiased algorithm we present can be 2-3 orders of magnitude more efficient than the "gold-standard" randomized Hamiltonian Monte Carlo.
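One standard route to unbiased estimators of this kind is randomized truncation in the style of Glynn and Rhee (stated here for orientation; the paper's construction may differ). If $\Delta_k$ are increments of successively refined biased estimators with $\mathbb{E}[Y] = \sum_{k \ge 0} \mathbb{E}[\Delta_k]$, then

$$\hat{Y} \;=\; \sum_{k=0}^{K} \frac{\Delta_k}{\mathbb{P}(K \ge k)}, \qquad \mathbb{E}[\hat{Y}] = \mathbb{E}[Y],$$

for a random truncation level $K$ drawn independently of the $\Delta_k$, with finite variance when the increments decay fast enough relative to the tail of $K$.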
arXiv Detail & Related papers (2023-11-08T21:19:52Z)
- Sampling from Gaussian Process Posteriors using Stochastic Gradient Descent [43.097493761380186]
Stochastic gradient algorithms are an efficient method of approximately solving linear systems.
We show that gradient descent produces accurate predictions, even in cases where it does not converge quickly to the optimum.
Experimentally, gradient descent achieves state-of-the-art performance on sufficiently large-scale or ill-conditioned regression tasks.
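A minimal sketch of the underlying idea, assuming plain full-batch gradient descent rather than the paper's stochastic variant: the GP posterior mean requires $v = (K + \sigma^2 I)^{-1} y$, which can be obtained by descending a quadratic objective whose gradient costs one matrix-vector product per step.

```python
# Minimal sketch (assumption: full-batch gradient descent rather than the
# paper's stochastic variant). The GP posterior mean needs
# v = (K + s2*I)^{-1} y; descend 0.5*v^T(K + s2*I)v - y^T v instead.
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, size=50)
y = np.sin(X) + 0.1 * rng.standard_normal(50)
A = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2) + 0.1 * np.eye(50)

v = np.zeros(50)
lr = 1.0 / np.linalg.eigvalsh(A).max()     # safe step size for a quadratic
for _ in range(2000):
    v -= lr * (A @ v - y)                  # gradient of the quadratic objective

print(np.linalg.norm(A @ v - y))           # residual: small after 2000 steps
```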
arXiv Detail & Related papers (2023-06-20T15:07:37Z)
- Stochastic Marginal Likelihood Gradients using Neural Tangent Kernels [78.6096486885658]
We introduce lower bounds to the linearized Laplace approximation of the marginal likelihood.
These bounds are amenable to gradient-based optimization and allow estimation accuracy to be traded off against computational complexity.
arXiv Detail & Related papers (2023-06-06T19:02:57Z)
- Adaptive LASSO estimation for functional hidden dynamic geostatistical model [69.10717733870575]
We propose a novel model selection algorithm based on a penalized maximum likelihood estimator (PMLE) for functional hidden dynamic geostatistical models (f-HD).
The algorithm is based on iterative optimisation and uses an adaptive least absolute shrinkage and selection operator (LASSO) penalty function, wherein the weights are obtained from the unpenalised f-HD maximum-likelihood estimators.
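For orientation, the generic adaptive-LASSO penalized likelihood this describes, with Zou-style weights from the unpenalized fit (the paper's f-HD-specific variant is not reproduced):

$$\hat{\beta} \;=\; \arg\min_{\beta} \; -\ell(\beta) \;+\; \lambda \sum_{j} \frac{|\beta_j|}{|\hat{\beta}^{\,\mathrm{ML}}_j|^{\gamma}}, \qquad \gamma > 0,$$

so coefficients that are large in the unpenalized maximum-likelihood fit are penalized lightly, while near-zero ones are shrunk aggressively.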
arXiv Detail & Related papers (2022-08-10T19:17:45Z)
- Online Statistical Inference for Stochastic Optimization via Kiefer-Wolfowitz Methods [8.890430804063705]
We first present the limiting distribution of the Polyak-Ruppert-averaging type Kiefer-Wolfowitz (AKW) estimators.
The distributional result reflects the trade-off between statistical efficiency and function query complexity.
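A minimal sketch of a Kiefer-Wolfowitz iteration with Polyak-Ruppert averaging on a 1-D noisy toy objective (an illustration only, not the paper's inference procedure): the optimizer sees noisy function values only, and gradients are replaced by finite differences with shrinking spacing.

```python
# Minimal sketch (assumption: 1-D noisy quadratic toy problem). Only noisy
# function queries are used, never exact gradients.
import numpy as np

rng = np.random.default_rng(5)

def noisy_f(x):                            # noisy evaluations of (x - 2)^2
    return (x - 2.0) ** 2 + 0.1 * rng.standard_normal()

x, avg = 0.0, 0.0
for n in range(1, 5001):
    a_n = 0.5 / n ** 0.75                  # slowly decaying step size
    c_n = 1.0 / n ** 0.25                  # finite-difference spacing
    g = (noisy_f(x + c_n) - noisy_f(x - c_n)) / (2.0 * c_n)
    x -= a_n * g                           # Kiefer-Wolfowitz update
    avg += (x - avg) / n                   # running Polyak-Ruppert average

print(avg)                                 # close to the minimizer x* = 2
```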
arXiv Detail & Related papers (2021-02-05T19:22:41Z)
- Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework [100.36569795440889]
This work is on the iteration complexity of zeroth-order (ZO) optimization, which does not require first-order information.
We show that with a careful design of coordinate importance sampling, the proposed ZO optimization method is efficient both in terms of iteration complexity and function query cost.
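A minimal sketch of a two-point zeroth-order gradient estimator with uniformly sampled coordinates (an assumption for brevity; the paper's importance sampling over coordinates is not reproduced):

```python
# Minimal sketch (assumption: coordinates sampled uniformly). The objective
# is queried as a black box; each update uses a two-point finite difference.
import numpy as np

rng = np.random.default_rng(6)
d, mu, lr = 10, 1e-4, 0.05
x = np.ones(d)

def f(x):                                  # black-box objective (queries only)
    return float(np.sum((x - 0.5) ** 2))

for _ in range(5000):
    j = rng.integers(d)                    # sampled coordinate
    e = np.zeros(d)
    e[j] = 1.0
    g_j = (f(x + mu * e) - f(x - mu * e)) / (2.0 * mu)
    x[j] -= lr * g_j                       # ZO coordinate update

print(float(np.max(np.abs(x - 0.5))))      # near 0 at the optimum
```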
arXiv Detail & Related papers (2020-12-21T17:29:58Z)
- A Near-Optimal Gradient Flow for Learning Neural Energy-Based Models [93.24030378630175]
We propose a novel numerical scheme to optimize the gradient flows for learning energy-based models (EBMs).
We derive a second-order Wasserstein gradient flow of the global relative entropy from the Fokker-Planck equation.
Compared with existing schemes, Wasserstein gradient flow is a smoother and near-optimal numerical scheme to approximate real data densities.
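For orientation, the classical first-order identity such schemes build on (Jordan-Kinderlehrer-Otto): the Fokker-Planck equation is the Wasserstein-2 gradient flow of the relative entropy $\mathrm{KL}(\rho \,\Vert\, \pi)$; the paper's second-order flow is not reproduced here:

$$\partial_t \rho_t \;=\; \nabla \cdot \left( \rho_t \, \nabla \frac{\delta\, \mathrm{KL}(\rho_t \Vert \pi)}{\delta \rho} \right) \;=\; \Delta \rho_t - \nabla \cdot \left( \rho_t \nabla \log \pi \right).$$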
arXiv Detail & Related papers (2019-10-31T02:26:20Z)