Towards black-box parameter estimation
- URL: http://arxiv.org/abs/2303.15041v2
- Date: Mon, 19 Feb 2024 11:19:12 GMT
- Title: Towards black-box parameter estimation
- Authors: Amanda Lenzi and Haavard Rue
- Abstract summary: We develop new black-box procedures to estimate parameters of statistical models based on weak parameter structure assumptions.
For well-structured likelihoods with frequent occurrences, this is achieved by pre-training a deep neural network on an extensive simulated database.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning algorithms have recently been shown to be a
successful tool for estimating parameters of statistical models for which
simulation is easy but likelihood computation is challenging. However, the
success of these approaches
depends on simulating parameters that sufficiently reproduce the observed data,
and, at present, there is a lack of efficient methods to produce these
simulations. We develop new black-box procedures to estimate parameters of
statistical models based only on weak parameter structure assumptions. For
well-structured likelihoods with frequent occurrences, such as in time series,
this is achieved by pre-training a deep neural network on an extensive
simulated database that covers a wide range of data sizes. For other types of
complex dependencies, an iterative algorithm guides simulations to the correct
parameter region in multiple rounds. These approaches can successfully estimate
and quantify the uncertainty of parameters from non-Gaussian models with
complex spatial and temporal dependencies. The success of our methods is a
first step towards a fully flexible automatic black-box estimation framework.
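As a toy illustration of the pre-training idea (not the authors' code), the sketch below builds a simulated database of AR(1) series over a wide parameter range and learns a map from a summary statistic to the parameter. A linear fit stands in for the paper's deep neural network, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(phi, n=200):
    """Simulate an AR(1) series x_t = phi * x_{t-1} + eps_t."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

def lag1_autocorr(x):
    """Summary statistic: lag-1 autocorrelation."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

# "Pre-training": a simulated database covering a wide parameter range.
phis = rng.uniform(-0.9, 0.9, size=2000)
feats = np.array([lag1_autocorr(simulate_ar1(p)) for p in phis])

# Learn a map summary -> parameter (a deep network in the paper; a
# least-squares linear fit here, purely for illustration).
A = np.vstack([feats, np.ones_like(feats)]).T
coef, _, _, _ = np.linalg.lstsq(A, phis, rcond=None)

# Black-box estimation: apply the learned map to a new data set.
x_obs = simulate_ar1(0.6)
phi_hat = coef[0] * lag1_autocorr(x_obs) + coef[1]
print(round(phi_hat, 2))
```

The same recipe scales in the paper's setting by replacing the hand-picked summary with a network that ingests the raw series across many data sizes.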
Related papers
- Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
arXiv Detail & Related papers (2024-04-11T09:23:36Z)
- Towards Learning Stochastic Population Models by Gradient Descent [0.0]
We show that simultaneous estimation of parameters and structure poses major challenges for optimization procedures.
We demonstrate accurate estimation of models but find that enforcing the inference of parsimonious, interpretable models drastically increases the difficulty.
arXiv Detail & Related papers (2024-04-10T14:38:58Z)
- Fast Shapley Value Estimation: A Unified Approach [71.92014859992263]
We propose a straightforward and efficient Shapley estimator, SimSHAP, by eliminating redundant techniques.
In our analysis of existing approaches, we observe that estimators can be unified as a linear transformation of randomly summed values from feature subsets.
Our experiments validate the effectiveness of our SimSHAP, which significantly accelerates the computation of accurate Shapley values.
arXiv Detail & Related papers (2023-11-02T06:09:24Z)
- Neural Likelihood Approximation for Integer Valued Time Series Data [0.0]
We construct a neural likelihood approximation that can be trained using unconditional simulation of the underlying model.
We demonstrate our method by performing inference on a number of ecological and epidemiological models.
arXiv Detail & Related papers (2023-10-19T07:51:39Z)
- Learning minimal representations of stochastic processes with variational autoencoders [52.99137594502433]
We introduce an unsupervised machine learning approach to determine the minimal set of parameters required to describe a process.
Our approach enables the autonomous discovery of unknown parameters describing stochastic processes.
arXiv Detail & Related papers (2023-07-21T14:25:06Z)
- Online estimation methods for irregular autoregressive models [0.0]
Currently available methods for addressing this problem, so-called online learning methods, use current parameter estimates and new data to update the estimators.
In this work we consider three online learning algorithms for parameter estimation in the context of time series models.
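A minimal sketch of the online-update idea in this entry, assuming a recursive least squares update for an AR(1) coefficient (an illustration under stated assumptions, not the paper's algorithms; all names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an AR(1) series with known coefficient, for demonstration.
phi_true = 0.7
n = 500
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + 0.5 * rng.standard_normal()

# Online estimation: each new observation updates the running estimate,
# with no need to revisit past data.
phi_hat = 0.0   # current estimate
P = 1e6         # scalar "inverse information" (large = uninformative start)
for t in range(1, n):
    z = x[t - 1]                          # regressor: previous value
    k = P * z / (1.0 + P * z * z)         # gain for this observation
    phi_hat += k * (x[t] - phi_hat * z)   # correct using the new data point
    P -= k * z * P                        # shrink the uncertainty

print(round(phi_hat, 2))
```

The estimate converges toward the true coefficient as observations stream in, which is the core appeal of online methods for time series.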
arXiv Detail & Related papers (2023-01-31T19:52:04Z)
- Embed and Emulate: Learning to estimate parameters of dynamical systems with uncertainty quantification [11.353411236854582]
This paper explores learning emulators for parameter estimation with uncertainty estimation of high-dimensional dynamical systems.
Our task is to accurately estimate a range of likely values of the underlying parameters.
On a coupled 396-dimensional multiscale Lorenz 96 system, our method significantly outperforms a typical parameter estimation method.
arXiv Detail & Related papers (2022-11-03T01:59:20Z)
- Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are used through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z)
- Neural Networks for Parameter Estimation in Intractable Models [0.0]
We show how to estimate parameters from max-stable processes, where inference is exceptionally challenging.
We use data from model simulations as input and train deep neural networks to learn statistical parameters.
arXiv Detail & Related papers (2021-07-29T21:59:48Z)
- MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
arXiv Detail & Related papers (2021-06-03T12:59:16Z)
- Instability, Computational Efficiency and Statistical Accuracy [101.32305022521024]
We develop a framework that yields statistical accuracy based on the interplay between the deterministic convergence rate of the algorithm at the population level and its degree of (in)stability when applied to an empirical object based on $n$ samples.
We provide applications of our general results to several concrete classes of models, including Gaussian mixture estimation, non-linear regression models, and informative non-response models.
arXiv Detail & Related papers (2020-05-22T22:30:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.