Sample-efficient estimation of entanglement entropy through supervised
learning
- URL: http://arxiv.org/abs/2309.07556v2
- Date: Wed, 3 Jan 2024 11:12:51 GMT
- Title: Sample-efficient estimation of entanglement entropy through supervised
learning
- Authors: Maximilian Rieger, Moritz Reh, Martin Gärttner
- Abstract summary: We put a particular focus on estimating aleatoric and epistemic uncertainty of the network's estimate.
We observe convergence in a regime of sample sizes in which the baseline method fails to give correct estimates.
As a further application of our method, highly relevant for quantum simulation experiments, we estimate the quantum mutual information for non-unitary evolution.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We explore a supervised machine learning approach to estimate the
entanglement entropy of multi-qubit systems from few experimental samples. We
put a particular focus on estimating both aleatoric and epistemic uncertainty
of the network's estimate and benchmark against the best known conventional
estimation algorithms. For states that are contained in the training
distribution, we observe convergence in a regime of sample sizes in which the
baseline method fails to give correct estimates, while extrapolation only seems
possible for regions close to the training regime. As a further application of
our method, highly relevant for quantum simulation experiments, we estimate the
quantum mutual information for non-unitary evolution by training our model on
different noise strengths.
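The listing does not include code; as a rough, illustrative sketch of the general idea only (not the authors' architecture, feature encoding, or training pipeline), the snippet below regresses a scalar target from feature vectors while predicting a per-sample variance (aleatoric uncertainty) via a Gaussian negative log-likelihood, and uses the spread of a small ensemble as a proxy for epistemic uncertainty. The network sizes, the placeholder features standing in for measurement-sample statistics, and the synthetic targets are all assumptions made for illustration.
```python
# Illustrative sketch only: a generic heteroscedastic regressor plus an
# ensemble for epistemic uncertainty. Architecture, features, and data are
# hypothetical and are NOT taken from the paper above.
import torch
import torch.nn as nn

class EntropyRegressor(nn.Module):
    """Maps a feature vector (e.g. outcome statistics estimated from few
    measurement samples) to a predicted entropy and a predicted log-variance."""
    def __init__(self, n_features: int, hidden: int = 128):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean_head = nn.Linear(hidden, 1)
        self.logvar_head = nn.Linear(hidden, 1)  # aleatoric (data) uncertainty

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h).squeeze(-1), self.logvar_head(h).squeeze(-1)

def gaussian_nll(mean, logvar, target):
    # Heteroscedastic Gaussian negative log-likelihood (constant term dropped).
    return 0.5 * (logvar + (target - mean) ** 2 / logvar.exp()).mean()

def train_ensemble(x, y, n_models=5, epochs=200):
    models = []
    for _ in range(n_models):  # independent initialisations -> epistemic spread
        model = EntropyRegressor(x.shape[1])
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(epochs):
            opt.zero_grad()
            mean, logvar = model(x)
            gaussian_nll(mean, logvar, y).backward()
            opt.step()
        models.append(model)
    return models

@torch.no_grad()
def predict(models, x):
    means = torch.stack([m(x)[0] for m in models])
    aleatoric = torch.stack([m(x)[1].exp() for m in models]).mean(0)
    return means.mean(0), aleatoric, means.var(0)  # variance of means ~ epistemic

# Toy usage with synthetic placeholder data (NOT physical measurement data):
x = torch.rand(512, 16)
y = x.sum(dim=1) / 16.0  # stand-in target, not a real entanglement entropy
models = train_ensemble(x, y)
mu, aleatoric_var, epistemic_var = predict(models, x[:4])
```
The averaged predicted variance plays the role of the aleatoric part, while the variance of the ensemble means serves as a rough epistemic estimate, mirroring the aleatoric/epistemic split the abstract refers to.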
Related papers
- One step closer to unbiased aleatoric uncertainty estimation [71.55174353766289]
We propose a new estimation method by actively de-noising the observed data.
By conducting a broad range of experiments, we demonstrate that our proposed approach provides a much closer approximation to the actual data uncertainty than the standard method.
arXiv Detail & Related papers (2023-12-16T14:59:11Z)
- Uncertainty Estimation in Instance Segmentation with Star-convex Shapes [4.197316670989004]
Deep neural network-based algorithms often exhibit incorrect predictions with unwarranted confidence levels.
Our study addresses the challenge of estimating the spatial certainty associated with the location of instances with star-convex shapes.
Our study demonstrates that combining fractional certainty estimation over individual certainty scores is an effective strategy.
arXiv Detail & Related papers (2023-09-19T10:49:33Z)
- A Tale of Sampling and Estimation in Discounted Reinforcement Learning [50.43256303670011]
We present a minimax lower bound on the discounted mean estimation problem.
We show that estimating the mean by directly sampling from the discounted kernel of the Markov process brings compelling statistical properties.
arXiv Detail & Related papers (2023-04-11T09:13:17Z)
- Fast Uncertainty Estimates in Deep Learning Interatomic Potentials [0.0]
We propose a method to estimate the predictive uncertainty based on a single neural network without the need for an ensemble.
We demonstrate that the quality of the uncertainty estimates matches those obtained from deep ensembles.
arXiv Detail & Related papers (2022-11-17T20:13:39Z)
- Compositional Score Modeling for Simulation-based Inference [28.422049267537965]
We introduce a new method based on conditional score modeling that enjoys the benefits of both approaches.
Our approach is sample-efficient, can naturally aggregate multiple observations at inference time, and avoids the drawbacks of standard inference methods.
arXiv Detail & Related papers (2022-09-28T17:08:31Z)
- Uncertainty Quantification for Traffic Forecasting: A Unified Approach [21.556559649467328]
Uncertainty is an essential consideration for time series forecasting tasks.
In this work, we focus on quantifying the uncertainty of traffic forecasting.
We develop Deep Spatio-Temporal Uncertainty Quantification (DeepSTUQ), which can estimate both aleatoric and epistemic uncertainty.
arXiv Detail & Related papers (2022-08-11T15:21:53Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when used in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- Quantifying Uncertainty in Deep Spatiotemporal Forecasting [67.77102283276409]
We describe two types of forecasting problems: regular grid-based and graph-based.
We analyze UQ methods from both the Bayesian and the frequentist points of view, cast in a unified framework via statistical decision theory.
Through extensive experiments on real-world road network traffic, epidemics, and air quality forecasting tasks, we reveal the statistical computational trade-offs for different UQ methods.
arXiv Detail & Related papers (2021-05-25T14:35:46Z)
- The Aleatoric Uncertainty Estimation Using a Separate Formulation with Virtual Residuals [51.71066839337174]
Existing methods can quantify the error in the target estimation, but they tend to underestimate it.
We propose a new separable formulation for the estimation of a signal and of its uncertainty, avoiding the effect of overfitting.
We demonstrate that the proposed method outperforms a state-of-the-art technique for signal and uncertainty estimation.
arXiv Detail & Related papers (2020-11-03T12:11:27Z)
- Lyapunov-Based Reinforcement Learning State Estimator [9.356469388299928]
We consider the state estimation problem for nonlinear discrete-time systems.
We combine Lyapunov's method in control theory and deep reinforcement learning to design the state estimator.
An actor-critic reinforcement learning algorithm is proposed to learn the state estimator approximated by a deep neural network.
arXiv Detail & Related papers (2020-10-26T12:38:09Z)
- Batch Stationary Distribution Estimation [98.18201132095066]
We consider the problem of approximating the stationary distribution of an ergodic Markov chain given a set of sampled transitions.
We propose a consistent estimator that is based on recovering a correction ratio function over the given data.
arXiv Detail & Related papers (2020-03-02T09:10:01Z)
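As a point of reference for the last entry above (Batch Stationary Distribution Estimation), a minimal count-based baseline for a finite state space is sketched below: build an empirical transition matrix from the sampled transitions and iterate it to a fixed point. This is not the correction-ratio estimator proposed in that paper; the state-space size, smoothing constant, and toy chain are assumptions made for illustration.
```python
# Naive baseline for stationary-distribution estimation from sampled
# transitions on a finite state space. NOT the correction-ratio estimator of
# the paper above; it only illustrates the problem setting.
import numpy as np

def empirical_stationary(transitions, n_states, smoothing=1e-6,
                         iters=10_000, tol=1e-12):
    """transitions: iterable of (state, next_state) pairs sampled from the chain."""
    counts = np.full((n_states, n_states), smoothing)
    for s, s_next in transitions:
        counts[s, s_next] += 1.0
    P = counts / counts.sum(axis=1, keepdims=True)  # row-stochastic estimate
    pi = np.full(n_states, 1.0 / n_states)
    for _ in range(iters):
        pi_next = pi @ P  # fixed-point update pi <- pi P
        if np.abs(pi_next - pi).max() < tol:
            pi = pi_next
            break
        pi = pi_next
    return pi / pi.sum()

# Toy usage: sample a trajectory from a known 3-state kernel.
rng = np.random.default_rng(0)
P_true = np.array([[0.9, 0.1, 0.0],
                   [0.2, 0.7, 0.1],
                   [0.1, 0.2, 0.7]])
state, samples = 0, []
for _ in range(5000):
    nxt = rng.choice(3, p=P_true[state])
    samples.append((state, nxt))
    state = nxt
print(empirical_stationary(samples, n_states=3))
```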
This list is automatically generated from the titles and abstracts of the papers on this site.