Positive-definite parametrization of mixed quantum states with deep
neural networks
- URL: http://arxiv.org/abs/2206.13488v1
- Date: Mon, 27 Jun 2022 17:51:38 GMT
- Title: Positive-definite parametrization of mixed quantum states with deep
neural networks
- Authors: Filippo Vicentini, Riccardo Rossi, Giuseppe Carleo
- Abstract summary: We show how to embed an autoregressive structure in the GHDO to allow direct sampling of the probability distribution.
We benchmark this architecture by the steady state of the dissipative transverse-field Ising model.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce the Gram-Hadamard Density Operator (GHDO), a new deep
neural-network architecture that can encode positive semi-definite density
operators of exponential rank with polynomial resources. We then show how to
embed an autoregressive structure in the GHDO to allow direct sampling of the
probability distribution. These properties are especially important when
representing and variationally optimizing the mixed quantum state of a system
interacting with an environment. Finally, we benchmark this architecture by
simulating the steady state of the dissipative transverse-field Ising model.
Estimating local observables and the Rényi entropy, we show significant
improvements over previous state-of-the-art variational approaches.
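The abstract's core guarantee, positive semi-definiteness by construction, comes from a Gram-type factorization: any matrix of the form ρ = M M† / Tr(M M†) is automatically Hermitian, positive semi-definite, and trace-one, regardless of the entries of M. The sketch below illustrates only this generic Gram construction with a plain NumPy matrix, not the actual GHDO architecture (which builds M from deep neural networks and adds a Hadamard structure); the function name and dimensions are illustrative.

```python
import numpy as np

def gram_density_matrix(M):
    """Build a valid density matrix from an arbitrary complex matrix M.

    rho = M M^dagger / Tr(M M^dagger) is Hermitian, positive
    semi-definite, and has unit trace by construction -- the core
    reason Gram-type parametrizations never leave the physical set
    of density operators during variational optimization.
    """
    rho = M @ M.conj().T
    return rho / np.trace(rho).real

rng = np.random.default_rng(0)
d, r = 4, 2  # Hilbert-space dimension and rank of the ansatz (illustrative)
M = rng.normal(size=(d, r)) + 1j * rng.normal(size=(d, r))
rho = gram_density_matrix(M)

assert np.allclose(rho, rho.conj().T)              # Hermitian
assert np.isclose(np.trace(rho).real, 1.0)          # unit trace
assert np.linalg.eigvalsh(rho).min() >= -1e-12      # positive semi-definite
```

Note that the rank of ρ is at most r, so increasing the width of M controls how mixed a state the ansatz can represent.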
Related papers
- Variational Encoder-Decoders for Learning Latent Representations of Physical Systems [0.0]
We present a framework for learning data-driven low-dimensional representations of a physical system.
We successfully model the hydraulic pressure response at observation wells of a groundwater flow model.
arXiv Detail & Related papers (2024-12-06T16:46:48Z)
- Latent Space Energy-based Neural ODEs [73.01344439786524]
This paper introduces novel deep dynamical models designed to represent continuous-time sequences.
We train the model using maximum likelihood estimation with Markov chain Monte Carlo.
Experimental results on oscillating systems, videos and real-world state sequences (MuJoCo) demonstrate that our model with the learnable energy-based prior outperforms existing counterparts.
arXiv Detail & Related papers (2024-09-05T18:14:22Z)
- Neural variational Data Assimilation with Uncertainty Quantification using SPDE priors [28.804041716140194]
Recent advances in deep learning enable addressing this problem with a neural architecture embedded in a variational data assimilation framework.
In this work we use the theory of Stochastic Partial Differential Equations (SPDEs) and Gaussian Processes (GPs) to estimate both the space and time covariance of the state.
arXiv Detail & Related papers (2024-02-02T19:18:12Z)
- Deep Neural Networks as Variational Solutions for Correlated Open Quantum Systems [0.0]
We show that parametrizing the density matrix directly with more powerful models can yield better variational ansatz functions.
We present results for the dissipative one-dimensional transverse-field Ising model and a two-dimensional dissipative Heisenberg model.
arXiv Detail & Related papers (2024-01-25T13:41:34Z)
- Subsurface Characterization using Ensemble-based Approaches with Deep Generative Models [2.184775414778289]
Inverse modeling is limited for ill-posed, high-dimensional applications due to computational costs and poor prediction accuracy with sparse datasets.
We combine the Wasserstein Generative Adversarial Network with Gradient Penalty (WGAN-GP) with the Ensemble Smoother with Multiple Data Assimilation (ES-MDA).
WGAN-GP is trained to generate high-dimensional K fields from a low-dimensional latent space and ES-MDA updates the latent variables by assimilating available measurements.
arXiv Detail & Related papers (2023-10-02T01:27:10Z)
- Dynamic Kernel-Based Adaptive Spatial Aggregation for Learned Image Compression [63.56922682378755]
We focus on extending spatial aggregation capability and propose a dynamic kernel-based transform coding.
The proposed adaptive aggregation generates kernel offsets to capture valid information within a content-conditioned range, aiding the transform.
Experimental results demonstrate that our method achieves superior rate-distortion performance on three benchmarks compared to the state-of-the-art learning-based methods.
arXiv Detail & Related papers (2023-08-17T01:34:51Z)
- Differentiating Metropolis-Hastings to Optimize Intractable Densities [51.16801956665228]
We develop an algorithm for automatic differentiation of Metropolis-Hastings samplers.
We apply gradient-based optimization to objectives expressed as expectations over intractable target densities.
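For context, the base sampler this paper differentiates is standard Metropolis-Hastings, which draws from a target known only up to a normalizing constant by accepting or rejecting proposed moves based on density ratios. The sketch below is only a plain random-walk MH sampler on a 1D Gaussian target (function names and parameters are illustrative), not the paper's differentiable variant.

```python
import math
import random

def metropolis_hastings(log_density, x0, n_steps, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings for an unnormalized 1D target.

    `log_density` is the log of the target up to an additive constant;
    the acceptance rule uses only density ratios, so the normalizing
    constant never appears.
    """
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)            # symmetric Gaussian proposal
        log_alpha = log_density(prop) - log_density(x)
        if math.log(rng.random()) < log_alpha:     # accept with prob min(1, ratio)
            x = prop
        samples.append(x)
    return samples

# Target: standard normal, up to its normalizing constant.
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_steps=20_000)
mean = sum(samples) / len(samples)  # should fluctuate around 0
```

The accept/reject step is the discrete, non-differentiable operation that makes gradients of such samplers nontrivial, which is precisely what the paper's automatic-differentiation approach targets.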
arXiv Detail & Related papers (2023-06-13T17:56:02Z)
- Machine learning in and out of equilibrium [58.88325379746631]
Our study uses a Fokker-Planck approach, adapted from statistical physics, to explore these parallels.
We focus in particular on the stationary state of the system in the long-time limit, which in conventional SGD is out of equilibrium.
We propose a new variation of stochastic gradient Langevin dynamics (SGLD) that harnesses without-replacement minibatching.
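To make the minibatching scheme concrete: without-replacement minibatching shuffles the dataset once per epoch and sweeps through disjoint batches, so every example contributes exactly once per epoch, in contrast to sampling batches independently. The sketch below is a generic scalar SGLD loop illustrating that scheme on a toy quadratic potential; it is not the paper's specific variant, and all names and hyperparameters are illustrative.

```python
import numpy as np

def sgld_epoch(theta, grad_u, data, lr, rng, batch=8):
    """One epoch of stochastic gradient Langevin dynamics with
    without-replacement minibatching: shuffle once, then sweep
    disjoint minibatches so each example is used exactly once."""
    n = len(data)
    idx = rng.permutation(n)                          # without replacement
    for start in range(0, n, batch):
        mb = data[idx[start:start + batch]]
        g = grad_u(theta, mb) * (n / len(mb))         # unbiased full-gradient estimate
        theta = theta - lr * g + rng.normal(scale=np.sqrt(2 * lr))
    return theta

# Toy target: potential U(theta) = 0.5 * sum_i (theta - x_i)^2, whose
# Gibbs distribution is a Gaussian centred on the data mean.
grad_u = lambda theta, mb: np.sum(theta - mb)
rng = np.random.default_rng(1)
data = rng.normal(loc=2.0, size=64)
theta = 0.0
for _ in range(200):
    theta = sgld_epoch(theta, grad_u, data, lr=0.005, rng=rng)
# theta now fluctuates around the data mean (~2.0)
```

The injected noise term with scale sqrt(2·lr) is what distinguishes SGLD from plain SGD: the iterates sample from a Gibbs distribution rather than collapsing onto the minimum.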
arXiv Detail & Related papers (2023-06-06T09:12:49Z)
- Distributed Bayesian Learning of Dynamic States [65.7870637855531]
The proposed algorithm performs distributed Bayesian filtering for finite-state hidden Markov models.
It can be used for sequential state estimation, as well as for modeling opinion formation over social networks under dynamic environments.
arXiv Detail & Related papers (2022-12-05T19:40:17Z)
- Ground state search by local and sequential updates of neural network quantum states [3.3711670942444023]
We propose a local optimization procedure for neural network quantum states.
We analyze both the ground state energy and the correlations for the non-integrable tilted Ising model with restricted Boltzmann machines.
We find that sequential local updates can lead to faster convergence to states which have energy and correlations closer to those of the ground state.
To show the generality of the approach we apply it to both 1D and 2D non-integrable spin systems.
arXiv Detail & Related papers (2022-07-22T05:24:19Z)
- An Energy-Based Prior for Generative Saliency [62.79775297611203]
We propose a novel generative saliency prediction framework that adopts an informative energy-based model as a prior distribution.
With the generative saliency model, we can obtain a pixel-wise uncertainty map from an image, indicating model confidence in the saliency prediction.
Experimental results show that our generative saliency model with an energy-based prior can achieve not only accurate saliency predictions but also reliable uncertainty maps consistent with human perception.
arXiv Detail & Related papers (2022-04-19T10:51:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.