Theoretical guarantees for neural estimators in parametric statistics
- URL: http://arxiv.org/abs/2506.18508v1
- Date: Mon, 23 Jun 2025 11:02:08 GMT
- Title: Theoretical guarantees for neural estimators in parametric statistics
- Authors: Almut Rödder, Manuel Hentschel, Sebastian Engelke
- Abstract summary: We study the risk of neural estimators by decomposing it into several terms that can be analyzed separately. We formulate easy-to-check assumptions ensuring that each term converges to zero, and we verify them for popular applications of neural estimators.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural estimators are simulation-based estimators for the parameters of a family of statistical models, which build a direct mapping from the sample to the parameter vector. They benefit from the versatility of available network architectures and efficient training methods developed in the field of deep learning. Neural estimators are amortized in the sense that, once trained, they can be applied to any new data set with almost no computational cost. While many papers have shown very good performance of these methods in simulation studies and real-world applications, so far no statistical guarantees are available to support these observations theoretically. In this work, we study the risk of neural estimators by decomposing it into several terms that can be analyzed separately. We formulate easy-to-check assumptions ensuring that each term converges to zero, and we verify them for popular applications of neural estimators. Our results provide a general recipe to derive theoretical guarantees also for broader classes of architectures and estimation problems.
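As a concrete illustration of the amortized workflow the abstract describes, the sketch below trains a neural estimator for the rate of a simple exponential model on simulated (sample, parameter) pairs; the model, architecture, and all names are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of an amortized neural estimator (illustrative, not the
# paper's architecture): train a network offline on simulated (sample,
# parameter) pairs, then apply it to any new data set at negligible cost.
import torch
import torch.nn as nn

def simulate(batch, n):
    """Draw theta ~ Unif(0.5, 5), then n i.i.d. Exp(theta) observations."""
    theta = 0.5 + 4.5 * torch.rand(batch)
    x = torch.distributions.Exponential(theta).sample((n,)).T  # (batch, n)
    return x, theta.unsqueeze(1)

class NeuralEstimator(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU())
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1), nn.Softplus())

    def forward(self, x):                      # x: (batch, n)
        h = self.phi(x.unsqueeze(-1)).mean(1)  # permutation-invariant pooling
        return self.rho(h)                     # positive rate estimate

net = NeuralEstimator()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    x, theta = simulate(batch=256, n=50)
    loss = ((net(x) - theta) ** 2).mean()      # Monte Carlo Bayes risk
    opt.zero_grad(); loss.backward(); opt.step()

# Amortized inference: a single forward pass on a new sample.
x_new = torch.distributions.Exponential(torch.tensor(2.0)).sample((1, 50))
print(net(x_new).item())                       # should be near 2.0
```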
Related papers
- Meta-Statistical Learning: Supervised Learning of Statistical Inference [59.463430294611626]
This work demonstrates that the tools and principles driving the success of large language models (LLMs) can be repurposed to tackle distribution-level tasks. We propose meta-statistical learning, a framework inspired by multi-instance learning that reformulates statistical inference tasks as supervised learning problems.
arXiv Detail & Related papers (2025-02-17T18:04:39Z)
- Estimating Koopman operators with sketching to provably learn large scale dynamical systems [37.18243295790146]
The theory of Koopman operators makes it possible to deploy non-parametric machine learning algorithms to predict and analyze complex dynamical systems.
We boost the efficiency of different kernel-based Koopman operator estimators using random projections.
We establish non-asymptotic error bounds giving a sharp characterization of the trade-offs between statistical learning rates and computational efficiency.
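A minimal sketch of the general recipe, with extended DMD on random Fourier features standing in for a kernel sketch; the toy system and the projection scheme are illustrative assumptions, not the estimators analyzed in the paper.

```python
# Illustrative sketch: estimate a Koopman matrix by ridge-regularized least
# squares on random Fourier features, a random projection that approximates
# a Gaussian kernel at much lower cost than the full kernel matrix.
import numpy as np

rng = np.random.default_rng(0)

def random_fourier_features(X, W, b):
    """Approximate a Gaussian kernel with D random features."""
    return np.sqrt(2.0 / W.shape[1]) * np.cos(X @ W + b)

# Trajectory of a simple nonlinear system x_{t+1} = f(x_t) + noise.
T, d, D = 2000, 2, 200
X = np.zeros((T, d))
for t in range(T - 1):
    X[t + 1, 0] = 0.9 * X[t, 0] + 0.1 * np.sin(X[t, 1]) + 0.01 * rng.standard_normal()
    X[t + 1, 1] = 0.9 * X[t, 1] + 0.1 * X[t, 0] ** 2 + 0.01 * rng.standard_normal()

W = rng.standard_normal((d, D))                 # random projection directions
b = rng.uniform(0, 2 * np.pi, D)
Phi0 = random_fourier_features(X[:-1], W, b)    # features at time t
Phi1 = random_fourier_features(X[1:], W, b)     # features at time t+1

# Ridge-regularized least squares for the D x D Koopman matrix.
lam = 1e-6
K = np.linalg.solve(Phi0.T @ Phi0 + lam * np.eye(D), Phi0.T @ Phi1)
eigvals = np.linalg.eigvals(K)                  # approximate Koopman spectrum
print(np.sort(np.abs(eigvals))[-5:])            # leading modes near or below 1
```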
arXiv Detail & Related papers (2023-06-07T15:30:03Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
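A generic round-robin imputer in the spirit of such frameworks is sketched below; a single ridge regressor stands in for HyperImpute's automatic per-column model selection, so everything here is an illustrative assumption rather than the package's API.

```python
# Generic iterative column-wise imputation: initialize missing entries with
# column means, then repeatedly regress each column on the others and
# refresh its imputed values until the loop stabilizes.
import numpy as np

def iterative_impute(X, n_iters=10, lam=1e-2):
    X = X.copy()
    mask = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[mask] = np.take(col_means, np.where(mask)[1])   # mean initialization
    for _ in range(n_iters):
        for j in range(X.shape[1]):
            miss = mask[:, j]
            if not miss.any():
                continue
            others = np.delete(X, j, axis=1)
            A = others[~miss]
            # Ridge regression of column j on all other columns.
            w = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]),
                                A.T @ X[~miss, j])
            X[miss, j] = others[miss] @ w             # refresh imputations
    return X

rng = np.random.default_rng(1)
full = rng.standard_normal((200, 3))
full[:, 1] = full[:, 0] + 0.1 * rng.standard_normal(200)  # correlated column
data = full.copy()
data[rng.random(data.shape) < 0.2] = np.nan               # 20% missing at random
imputed = iterative_impute(data)
print(np.abs(imputed - full)[np.isnan(data)].mean())      # mean imputation error
```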
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- Nonparametric likelihood-free inference with Jensen-Shannon divergence for simulator-based models with categorical output [1.4298334143083322]
Likelihood-free inference for simulator-based statistical models has attracted a surge of interest, both in the machine learning and statistics communities.
Here we derive a set of theoretical results to enable estimation, hypothesis testing and construction of confidence intervals for model parameters using computational properties of the Jensen-Shannon divergence.
Such an approximation offers a rapid alternative to more computation-intensive approaches and can be attractive for diverse applications of simulator-based models.
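A minimal sketch of the discriminator-based route to the Jensen-Shannon divergence, minimized over a parameter grid; the exponential model and the feature map are assumptions for illustration, not the paper's categorical setup.

```python
# Sketch of classifier-based JSD estimation for likelihood-free inference:
# with the optimal discriminator D = p/(p+q), one has
# JSD(p, q) = log 2 + 0.5 E_p[log D] + 0.5 E_q[log(1 - D)],
# so the parameter minimizing the estimated JSD best matches the data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
x_obs = rng.exponential(scale=1 / 2.0, size=1000)    # "observed" data, rate 2

def jsd_hat(x_p, x_q):
    """Plug-in JSD via a probabilistic classifier."""
    X = np.concatenate([x_p, x_q]).reshape(-1, 1)
    X = np.hstack([X, np.log(X + 1e-12)])            # simple feature map
    y = np.concatenate([np.ones(len(x_p)), np.zeros(len(x_q))])
    d = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    n = len(x_p)
    return (np.log(2) + 0.5 * np.mean(np.log(d[:n] + 1e-12))
            + 0.5 * np.mean(np.log(1 - d[n:] + 1e-12)))

grid = np.linspace(0.5, 5.0, 19)
scores = [jsd_hat(x_obs, rng.exponential(1 / th, size=1000)) for th in grid]
print(grid[int(np.argmin(scores))])                  # close to the true rate 2
```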
arXiv Detail & Related papers (2022-05-22T18:00:13Z)
- DeepBayes -- an estimator for parameter estimation in stochastic nonlinear dynamical models [11.917949887615567]
We propose DeepBayes estimators that leverage the power of deep recurrent neural networks to learn an estimator.
The deep recurrent neural network architectures can be trained offline and ensure significant time savings during inference.
We demonstrate the applicability of our proposed method on different example models and perform detailed comparisons with state-of-the-art approaches.
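A toy version of this offline-training/fast-inference pattern is sketched below, assuming an AR(1) process with unknown coefficient; the architecture and model are illustrative, not the paper's.

```python
# Illustrative DeepBayes-style setup: a recurrent network is trained offline
# on simulated trajectories so that inference on new data is a single
# forward pass, yielding significant time savings at inference.
import torch
import torch.nn as nn

def simulate_ar1(batch, T):
    a = torch.rand(batch, 1) * 1.8 - 0.9          # coefficient in (-0.9, 0.9)
    x = torch.zeros(batch, T)
    for t in range(1, T):
        x[:, t] = a.squeeze(1) * x[:, t - 1] + 0.1 * torch.randn(batch)
    return x, a

class RNNEstimator(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.gru = nn.GRU(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                          # x: (batch, T)
        _, h = self.gru(x.unsqueeze(-1))           # final hidden state
        return self.head(h[-1])                    # (batch, 1)

net = RNNEstimator()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(1000):                              # offline training
    x, a = simulate_ar1(256, 64)
    loss = ((net(x) - a) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

x_new, a_true = simulate_ar1(1, 64)                # amortized inference
print(net(x_new).item(), a_true.item())            # estimate vs. truth
```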
arXiv Detail & Related papers (2022-05-04T18:12:17Z)
- Instance-Based Neural Dependency Parsing [56.63500180843504]
We develop neural models that possess an interpretable inference process for dependency parsing.
Our models adopt instance-based inference, where dependency edges are extracted and labeled by comparing them to edges in a training set.
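A toy sketch of the idea, assuming random stand-in edge embeddings in place of real head/dependent representations: each predicted label comes with the indices of the training edges that support it.

```python
# Toy instance-based inference: a candidate dependency edge is labeled by
# its nearest training edges in an embedding space, so every prediction
# points back to concrete training instances.
import numpy as np

rng = np.random.default_rng(0)
train_edges = rng.standard_normal((500, 16))       # stored edge embeddings
train_labels = rng.integers(0, 5, size=500)        # e.g. 5 relation types

def label_edge(edge_vec, k=5):
    sims = train_edges @ edge_vec / (
        np.linalg.norm(train_edges, axis=1) * np.linalg.norm(edge_vec))
    nearest = np.argsort(sims)[-k:]                # k most similar edges
    votes = np.bincount(train_labels[nearest], minlength=5)
    return votes.argmax(), nearest                 # label + its "evidence"

label, support = label_edge(rng.standard_normal(16))
print(label, support)                              # interpretable prediction
```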
arXiv Detail & Related papers (2021-09-28T05:30:52Z)
- Neural Networks for Parameter Estimation in Intractable Models [0.0]
We show how to estimate parameters from max-stable processes, where inference is exceptionally challenging.
We use data from model simulations as input and train deep neural networks to learn statistical parameters.
arXiv Detail & Related papers (2021-07-29T21:59:48Z)
- MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
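A minimal sketch of this classifier-based view, assuming a toy Gaussian simulator: a network separates dependent pairs (theta, x) from independent ones, and its logit approximates the log likelihood-to-evidence ratio for any pair, which is exactly a mutual-information-style objective.

```python
# Amortized likelihood-to-evidence ratio estimation: train a classifier to
# distinguish joint samples (theta, x) from pairs with theta shuffled; the
# optimal logit equals log p(x | theta) / p(x).
import torch
import torch.nn as nn

def sample_pairs(batch):
    theta = torch.randn(batch, 1)                 # prior draw
    x = theta + 0.5 * torch.randn(batch, 1)       # simulator: x ~ N(theta, 0.25)
    return theta, x

net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
for _ in range(2000):
    theta, x = sample_pairs(256)
    joint = torch.cat([theta, x], dim=1)          # dependent pairs
    marg = torch.cat([theta[torch.randperm(256)], x], dim=1)  # shuffled pairs
    logits = net(torch.cat([joint, marg]))
    labels = torch.cat([torch.ones(256, 1), torch.zeros(256, 1)])
    loss = bce(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()

# The trained logit is an amortized log ratio, usable for any (theta, x)
# without further simulation.
print(net(torch.tensor([[1.0, 1.0]])).item())     # positive: x matches theta
```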
arXiv Detail & Related papers (2021-06-03T12:59:16Z)
- Performance metrics for intervention-triggering prediction models do not reflect an expected reduction in outcomes from using the model [71.9860741092209]
Clinical researchers often select among and evaluate risk prediction models.
Standard metrics calculated from retrospective data are only related to model utility under certain assumptions.
When predictions are delivered repeatedly throughout time, the relationship between standard metrics and utility is further complicated.
arXiv Detail & Related papers (2020-06-02T16:26:49Z)
- Statistical Guarantees for Regularized Neural Networks [4.254099382808598]
We develop a general statistical guarantee for estimators that consist of a least-squares term and a regularizer.
Our results establish a mathematical basis for regularized estimation of neural networks.
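For orientation, estimators of this type take the generic penalized least-squares form below; the notation is assumed here for illustration and is not taken from the paper.

```latex
% A network estimator fitted by a least-squares term plus a regularizer:
% f_W ranges over networks with weights W, Omega is a complexity penalty
% (e.g. a norm of the weights), and lambda >= 0 is a tuning parameter.
\hat{f} \in \operatorname*{arg\,min}_{f_W}
  \Biggl\{ \sum_{i=1}^{n} \bigl( y_i - f_W(x_i) \bigr)^2
           + \lambda\, \Omega(W) \Biggr\}
```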
arXiv Detail & Related papers (2020-05-30T15:28:47Z)
- Machine learning for causal inference: on the use of cross-fit estimators [77.34726150561087]
Doubly-robust cross-fit estimators have been proposed to yield better statistical properties.
We conducted a simulation study to assess the performance of several estimators for the average causal effect (ACE).
When used with machine learning, the doubly-robust cross-fit estimators substantially outperformed all of the other estimators in terms of bias, variance, and confidence interval coverage.
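A minimal sketch of a cross-fit doubly-robust (AIPW) estimator of the ACE, with random forests as the machine-learning nuisance models; the data-generating process and model choices are illustrative assumptions.

```python
# Doubly-robust (AIPW) ACE estimation with K-fold cross-fitting: nuisance
# models (propensity and outcome regressions) are fit on one part of the
# data and evaluated on the held-out part, then the AIPW scores are averaged.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n = 2000
X = rng.standard_normal((n, 3))
p = 1 / (1 + np.exp(-X[:, 0]))                  # true propensity score
A = rng.binomial(1, p)                          # treatment assignment
Y = 1.0 * A + X[:, 1] + rng.standard_normal(n)  # true ACE = 1.0

scores = np.zeros(n)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    ps = RandomForestClassifier(n_estimators=200, random_state=0)
    ps.fit(X[train], A[train])
    e = np.clip(ps.predict_proba(X[test])[:, 1], 0.01, 0.99)
    mu1 = RandomForestRegressor(random_state=0).fit(
        X[train][A[train] == 1], Y[train][A[train] == 1]).predict(X[test])
    mu0 = RandomForestRegressor(random_state=0).fit(
        X[train][A[train] == 0], Y[train][A[train] == 0]).predict(X[test])
    scores[test] = (mu1 - mu0
                    + A[test] * (Y[test] - mu1) / e
                    - (1 - A[test]) * (Y[test] - mu0) / (1 - e))
print(scores.mean())                            # approximately 1.0
```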
arXiv Detail & Related papers (2020-04-21T23:09:55Z)