A machine learning approach to Bayesian parameter estimation
- URL: http://arxiv.org/abs/2006.02369v3
- Date: Tue, 21 Sep 2021 11:56:48 GMT
- Title: A machine learning approach to Bayesian parameter estimation
- Authors: Samuel P. Nolan, Augusto Smerzi and Luca Pezzè
- Abstract summary: We formulate parameter estimation as a classification task and use artificial neural networks to efficiently perform Bayesian estimation.
We show that the network's posterior distribution is centered at the true (unknown) value of the parameter within an uncertainty given by the inverse Fisher information.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian estimation is a powerful theoretical paradigm for the operation of
quantum sensors. However, the Bayesian method for statistical inference
generally suffers from demanding calibration requirements that have so far
restricted its use to proof-of-principle experiments. In this theoretical
study, we formulate parameter estimation as a classification task and use
artificial neural networks to efficiently perform Bayesian estimation. We show
that the network's posterior distribution is centered at the true (unknown)
value of the parameter within an uncertainty given by the inverse Fisher
information, representing the ultimate sensitivity limit for the given
apparatus. When only a limited number of calibration measurements are
available, our machine-learning based procedure outperforms standard
calibration methods. Thus, our work paves the way for Bayesian quantum sensors
which can benefit from efficient optimization methods, such as in adaptive
schemes, and take advantage of complex non-classical states. These capabilities
can significantly enhance the sensitivity of future devices.
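As a rough illustration of the idea in the abstract (not the authors' implementation), the sketch below recasts single-parameter phase estimation as a classification task: a softmax classifier trained on simulated calibration data returns class probabilities over a discretized phase grid, which play the role of a Bayesian posterior, and its width can be compared with the Cramér-Rao limit 1/sqrt(mF). The two-outcome interferometer model p(+1|φ) = (1 + cos φ)/2 (for which F = 1 per shot), the grid, and the use of scikit-learn's multinomial logistic regression are illustrative assumptions.

```python
# Minimal sketch, assuming a toy interferometer with p(+1 | phi) = (1 + cos phi) / 2.
# The classifier's class probabilities over a discretized phase grid act as a
# Bayesian posterior; its width is compared with the Cramer-Rao bound 1/sqrt(m).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

grid = np.linspace(0.1, np.pi - 0.1, 60)   # discretized phase values (the "classes")
m = 100                                    # measurements per estimation run
n_train = 200                              # calibration runs per grid point

def simulate(phi, n_runs):
    """Fraction of +1 outcomes in m shots, for n_runs independent runs."""
    p_plus = (1.0 + np.cos(phi)) / 2.0
    return rng.binomial(m, p_plus, size=n_runs) / m

# Calibration (training) set: feature = observed fraction, label = grid index.
X = np.concatenate([simulate(phi, n_train) for phi in grid]).reshape(-1, 1)
y = np.repeat(np.arange(len(grid)), n_train)

# Multinomial logistic regression suffices here because the true posterior is
# log-linear in the sufficient statistic; a deeper network would be used in general.
clf = LogisticRegression(max_iter=5000).fit(X, y)

# "Unknown" true phase: the posterior should be centered near it, with a
# standard deviation close to the Cramer-Rao bound 1/sqrt(m * F) = 1/sqrt(m).
phi_true = 1.3
x_new = simulate(phi_true, 1).reshape(1, -1)
posterior = clf.predict_proba(x_new)[0]

mean = np.sum(posterior * grid)
std = np.sqrt(np.sum(posterior * (grid - mean) ** 2))
print(f"posterior mean  = {mean:.3f}  (true phase {phi_true})")
print(f"posterior width = {std:.3f}  vs  1/sqrt(m) = {1 / np.sqrt(m):.3f}")
```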
Related papers
- Benchmarking Bayesian quantum estimation [0.0]
This work focuses on the benchmarking of protocols implementing Bayesian estimations.
By comparing different figures of merit, evidence is provided in favor of using the median of the quadratic error in the estimations.
These results find natural applications to practical problems within the quantum estimation framework.
arXiv Detail & Related papers (2024-01-26T14:29:31Z) - Function-Space Regularization in Neural Networks: A Probabilistic Perspective [51.133793272222874]
We show that we can derive a well-motivated regularization technique that allows explicitly encoding information about desired predictive functions into neural network training.
We evaluate the utility of this regularization technique empirically and demonstrate that the proposed method leads to near-perfect semantic shift detection and highly-calibrated predictive uncertainty estimates.
arXiv Detail & Related papers (2023-12-28T17:50:56Z) - Model-aware reinforcement learning for high-performance Bayesian experimental design in quantum metrology [0.5461938536945721]
Quantum sensors offer control flexibility during estimation, allowing the experimenter to manipulate various control parameters.
We introduce a versatile procedure capable of optimizing a wide range of problems in quantum metrology, estimation, and hypothesis testing.
We combine model-aware reinforcement learning (RL) with Bayesian estimation based on particle filtering (a minimal particle-filter sketch appears after this list).
arXiv Detail & Related papers (2023-12-28T12:04:15Z) - Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability [50.44439018155837]
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error we enable end-to-end backpropagation.
It is directly applicable to existing computational pipelines allowing reliable black-box posterior inference.
arXiv Detail & Related papers (2023-10-20T10:20:45Z) - Stochastic Marginal Likelihood Gradients using Neural Tangent Kernels [78.6096486885658]
We introduce lower bounds to the linearized Laplace approximation of the marginal likelihood.
These bounds are amenable to gradient-based optimization and allow estimation accuracy to be traded off against computational complexity.
arXiv Detail & Related papers (2023-06-06T19:02:57Z) - Sharp Calibrated Gaussian Processes [58.94710279601622]
State-of-the-art approaches for designing calibrated models rely on inflating the Gaussian process posterior variance.
We present a calibration approach that generates predictive quantiles using a computation inspired by the vanilla Gaussian process posterior variance.
Our approach is shown to yield a calibrated model under reasonable assumptions.
arXiv Detail & Related papers (2023-02-23T12:17:36Z) - MARS: Meta-Learning as Score Matching in the Function Space [79.73213540203389]
We present a novel approach to extracting inductive biases from a set of related datasets.
We use functional Bayesian neural network inference, which views the prior as a process and performs inference in the function space.
Our approach can seamlessly acquire and represent complex prior knowledge by meta-learning the score function of the data-generating process.
arXiv Detail & Related papers (2022-10-24T15:14:26Z) - Machine Learning Based Parameter Estimation of Gaussian Quantum States [14.85374185122389]
We propose a machine learning framework for parameter estimation of single mode Gaussian quantum states.
Under a Bayesian framework, our approach estimates parameters of suitable prior distributions from measured data.
arXiv Detail & Related papers (2021-08-13T04:59:16Z) - Frequentist Parameter Estimation with Supervised Learning [0.0]
We use regression to infer a machine-learned point estimate of an unknown parameter.
When the number of training measurements is large, this is identical to the well-known maximum-likelihood estimator (MLE).
We show that the machine-learned estimator inherits the desirable properties of the MLE, up to a limit imposed by the resolution of the training grid.
arXiv Detail & Related papers (2021-05-26T02:24:25Z) - Parameterized Temperature Scaling for Boosting the Expressive Power in Post-Hoc Uncertainty Calibration [57.568461777747515]
We introduce a novel calibration method, Parametrized Temperature Scaling (PTS)
We demonstrate that the performance of accuracy-preserving state-of-the-art post-hoc calibrators is limited by their intrinsic expressive power.
We show with extensive experiments that our novel accuracy-preserving approach consistently outperforms existing algorithms across a large number of model architectures, datasets and metrics.
arXiv Detail & Related papers (2021-02-24T10:18:30Z) - Diagnosing Imperfections in Quantum Sensors via Generalized Cramér-Rao Bounds [0.0]
We show that the third-order absolute moment can give a superior capability in revealing biases in the estimation, compared to standard approaches.
Our studies point to the identification of an alternative strategy that brings a possible advantage in monitoring the correct operation of high-precision sensors.
arXiv Detail & Related papers (2020-01-07T08:21:18Z)
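The "Model-aware reinforcement learning" entry above combines RL with Bayesian estimation based on particle filtering. The sketch below shows only the particle-filter Bayesian update for the same toy phase-estimation model used earlier; it is a minimal illustrative sketch under assumed settings, not that paper's implementation, and it omits the RL-controlled measurement choices.

```python
# Minimal particle-filter sketch (assumptions, not the paper's code): sequential Bayesian
# updating of a phase, with the toy measurement model p(+1 | phi) = (1 + cos phi) / 2.
import numpy as np

rng = np.random.default_rng(1)

n_particles = 2000
particles = rng.uniform(0.0, np.pi, n_particles)   # samples from a flat prior over the phase
weights = np.full(n_particles, 1.0 / n_particles)

phi_true = 1.3                                     # "unknown" phase to be estimated

def likelihood(outcome, phi):
    """p(outcome | phi) for a two-outcome measurement."""
    p_plus = (1.0 + np.cos(phi)) / 2.0
    return p_plus if outcome == 1 else 1.0 - p_plus

for _ in range(200):                               # 200 sequential measurements
    outcome = 1 if rng.random() < (1.0 + np.cos(phi_true)) / 2.0 else -1
    # Bayes update: reweight every particle by the likelihood of the observed outcome.
    weights = weights * likelihood(outcome, particles)
    weights /= weights.sum()
    # Resample with a little jitter when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = np.clip(particles[idx] + rng.normal(0.0, 0.01, n_particles), 0.0, np.pi)
        weights = np.full(n_particles, 1.0 / n_particles)

mean = np.sum(weights * particles)
std = np.sqrt(np.sum(weights * (particles - mean) ** 2))
print(f"posterior mean {mean:.3f} +/- {std:.3f}  (true phase {phi_true})")
```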
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.