A Statistical Decision-Theoretical Perspective on the Two-Stage Approach
to Parameter Estimation
- URL: http://arxiv.org/abs/2204.00036v1
- Date: Thu, 31 Mar 2022 18:19:47 GMT
- Title: A Statistical Decision-Theoretical Perspective on the Two-Stage Approach
to Parameter Estimation
- Authors: Braghadeesh Lakshminarayanan, Cristian R. Rojas
- Abstract summary: The Two-Stage (TS) approach can be applied to obtain reliable parametric estimates.
We show how to apply the TS approach on models for independent and identically distributed samples.
- Score: 7.599399338954307
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the most important problems in system identification and statistics is
how to estimate the unknown parameters of a given model. Optimization methods
and specialized procedures, such as Expectation-Maximization (EM), can be used
when the likelihood function can be computed. For situations where one can only
simulate from a parametric model, but the likelihood is difficult or impossible
to evaluate, a technique known as the Two-Stage (TS) Approach can be applied to
obtain reliable parametric estimates. Unfortunately, there is currently a lack
of theoretical justification for TS. In this paper, we propose a statistical
decision-theoretical derivation of TS, which leads to Bayesian and Minimax
estimators. We also show how to apply the TS approach on models for independent
and identically distributed samples, by computing quantiles of the data as a
first step, and using a linear function as the second stage. The proposed
method is illustrated via numerical simulations.
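The two stages described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration under assumed choices (a toy Gaussian location model, a fixed quantile grid), not the paper's own experiments: empirical quantiles compress each dataset (first stage), and a least-squares linear map from quantiles to the parameter is learned from simulations (second stage).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (an illustrative assumption): iid N(theta, 1) with unknown mean.
# Simulate one dataset per candidate parameter on a grid.
thetas = np.linspace(-3.0, 3.0, 61)
n = 500
probs = np.linspace(0.05, 0.95, 10)

# First stage: compress each simulated dataset into its empirical quantiles.
Q = np.array([np.quantile(rng.normal(t, 1.0, size=n), probs) for t in thetas])

# Second stage: fit a linear map from quantiles to parameters by least squares.
A = np.column_stack([Q, np.ones(len(thetas))])
coef, *_ = np.linalg.lstsq(A, thetas, rcond=None)

def ts_estimate(data):
    """Compress the observed data to quantiles, then apply the linear map."""
    q = np.quantile(data, probs)
    return float(np.append(q, 1.0) @ coef)

observed = rng.normal(1.5, 1.0, size=n)   # true theta = 1.5
theta_hat = ts_estimate(observed)
```

Note that no likelihood evaluation is needed anywhere: only the ability to simulate from the model at chosen parameter values.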
Related papers
- Online non-parametric likelihood-ratio estimation by Pearson-divergence
functional minimization [55.98760097296213]
We introduce a new framework for online non-parametric likelihood-ratio estimation (OLRE) for the setting where pairs of iid observations $(x_t \sim p, x'_t \sim q)$ are observed over time.
We provide theoretical guarantees for the performance of the OLRE method along with empirical validation in synthetic experiments.
arXiv Detail & Related papers (2023-11-03T13:20:11Z) - Learning Robust Statistics for Simulation-based Inference under Model
Misspecification [23.331522354991527]
We propose the first general approach to handle model misspecification that works across different classes of simulation-based inference methods.
We show that our method yields robust inference in misspecified scenarios, whilst still being accurate when the model is well-specified.
arXiv Detail & Related papers (2023-05-25T09:06:26Z) - Misspecification-robust Sequential Neural Likelihood for
Simulation-based Inference [0.20971479389679337]
We propose a novel SNL method, which through the incorporation of additional adjustment parameters, is robust to model misspecification.
We demonstrate the efficacy of our approach through several illustrative examples.
arXiv Detail & Related papers (2023-01-31T02:28:18Z) - Inverting brain grey matter models with likelihood-free inference: a
tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z) - Sampling from Arbitrary Functions via PSD Models [55.41644538483948]
We take a two-step approach by first modeling the probability distribution and then sampling from that model.
We show that these models can approximate a large class of densities concisely using few evaluations, and present a simple algorithm to effectively sample from these models.
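The two-step recipe (model the distribution, then sample from the model) can be sketched with a simple stand-in: here a histogram density on a grid replaces the paper's PSD model, and sampling from the fitted model is done by inverse-CDF lookup. All concrete choices below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Target (unnormalized) density we want to sample from.
f = lambda x: np.exp(-x**2 / 2) * (1 + np.sin(3 * x)) ** 2

# Step 1: model the distribution -- a histogram on a grid stands in
# for the PSD model of the paper.
grid = np.linspace(-4, 4, 1001)
pdf = f(grid)
cdf = np.cumsum(pdf)
cdf /= cdf[-1]

# Step 2: sample from the fitted model via inverse-CDF lookup.
u = rng.random(10_000)
samples = grid[np.searchsorted(cdf, u)]
```

The design point is that step 2 only ever touches the fitted model, so any density approximation with a tractable normalizer admits the same sampling scheme.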
arXiv Detail & Related papers (2021-10-20T12:25:22Z) - Neural Networks for Parameter Estimation in Intractable Models [0.0]
We show how to estimate parameters from max-stable processes, where inference is exceptionally challenging.
We use data from model simulations as input and train deep neural networks to learn statistical parameters.
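The simulate-then-regress idea can be sketched in plain NumPy. The exponential simulator, the summary statistics, and the tiny one-hidden-layer network below are all illustrative assumptions; the paper's max-stable setting and architectures are more involved.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for an intractable model: iid Exponential(rate = theta).
def summaries(data):
    return np.array([data.mean(), data.std(), np.median(data)])

m, n = 2000, 200
theta_train = rng.uniform(0.5, 3.0, size=m)
X = np.array([summaries(rng.exponential(1.0 / t, size=n)) for t in theta_train])
y = theta_train.reshape(-1, 1)

# Standardize inputs for stable training.
mu, sd = X.mean(0), X.std(0)
X = (X - mu) / sd

# One-hidden-layer network trained by full-batch gradient descent on MSE.
h, lr = 32, 0.05
W1 = rng.normal(0, 0.5, (3, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.5, (h, 1)); b2 = np.zeros(1)
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)
    err = H @ W2 + b2 - y
    dH = (err @ W2.T) * (1 - H**2)          # backprop through tanh
    W2 -= lr * (H.T @ err / m); b2 -= lr * err.mean(0)
    W1 -= lr * (X.T @ dH / m);  b1 -= lr * dH.mean(0)

def nn_estimate(data):
    """Map the data's summary statistics through the trained network."""
    s = (summaries(data) - mu) / sd
    return float(np.tanh(s @ W1 + b1) @ W2 + b2)

obs = rng.exponential(1.0 / 2.0, size=n)    # true rate = 2.0
theta_hat = nn_estimate(obs)
```

As in the two-stage approach above, the network only ever sees (simulated summaries, parameter) pairs, so no likelihood evaluations are required.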
arXiv Detail & Related papers (2021-07-29T21:59:48Z) - MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood
Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
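The amortized ratio-estimation idea can be sketched in a Gaussian toy problem where the log likelihood-to-evidence ratio is exactly a quadratic form, so a logistic classifier on quadratic features can recover it. The simulator and feature map are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy simulator: theta ~ N(0, 1), x | theta ~ N(theta, 1), so x ~ N(0, 2).
m = 20_000
theta = rng.normal(size=m)
x = theta + rng.normal(size=m)

# Positive class: dependent (joint) pairs; negative class: theta paired
# with shuffled x, i.e. draws from the product of the marginals.
th_all = np.concatenate([theta, theta])
x_all = np.concatenate([x, rng.permutation(x)])
lab = np.concatenate([np.ones(m), np.zeros(m)])

# Quadratic features: for this Gaussian toy the true log ratio
# log p(x|theta)/p(x) is quadratic, so it is exactly realizable.
F = np.column_stack([np.ones(2 * m), th_all, x_all,
                     th_all * x_all, th_all**2, x_all**2])

# Logistic regression by Newton-Raphson; with balanced classes the
# fitted logit estimates the log likelihood-to-evidence ratio.
b = np.zeros(F.shape[1])
for _ in range(25):
    pr = 1 / (1 + np.exp(-np.clip(F @ b, -30, 30)))
    W = pr * (1 - pr)
    b += np.linalg.solve(F.T @ (W[:, None] * F) + 1e-6 * np.eye(6),
                         F.T @ (lab - pr))

def log_ratio(th, xv):
    """Amortized estimate of log p(x | theta) / p(x)."""
    f = np.array([1.0, th, xv, th * xv, th**2, xv**2])
    return float(f @ b)
```

Training the classifier to separate joint from marginal pairs is exactly what ties this approach to mutual information: the expected logit under the joint is a lower-bound estimate of I(theta; x).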
arXiv Detail & Related papers (2021-06-03T12:59:16Z) - Score Matched Conditional Exponential Families for Likelihood-Free
Inference [0.0]
Likelihood-Free Inference (LFI) relies on simulations from the model.
We generate parameter-simulation pairs from the model independently of the observation.
We use Neural Networks whose weights are tuned with Score Matching to learn a conditional exponential family likelihood approximation.
arXiv Detail & Related papers (2020-12-20T11:57:30Z) - Machine learning for causal inference: on the use of cross-fit
estimators [77.34726150561087]
Doubly-robust cross-fit estimators have been proposed to yield better statistical properties.
We conducted a simulation study to assess the performance of several estimators for the average causal effect (ACE).
When used with machine learning, the doubly-robust cross-fit estimators substantially outperformed all of the other estimators in terms of bias, variance, and confidence interval coverage.
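The cross-fit doubly-robust (AIPW) recipe can be sketched as follows. Plain logistic and least-squares fits stand in for the machine-learning nuisance learners, and the data-generating process is a made-up example with a known ACE of 2.

```python
import numpy as np

rng = np.random.default_rng(3)

# Made-up observational DGP: confounder x drives both treatment and outcome.
n = 4000
x = rng.normal(size=n)
t = (rng.random(n) < 1 / (1 + np.exp(-x))).astype(float)
y = 2.0 * t + x + rng.normal(size=n)        # true ACE = 2

X = np.column_stack([np.ones(n), x])

def fit_logistic(Xf, tf, iters=25):
    """Propensity model: logistic regression via Newton-Raphson."""
    b = np.zeros(Xf.shape[1])
    for _ in range(iters):
        pr = 1 / (1 + np.exp(-Xf @ b))
        W = pr * (1 - pr)
        b += np.linalg.solve(Xf.T @ (W[:, None] * Xf), Xf.T @ (tf - pr))
    return b

def fit_ols(Xf, yf):
    """Outcome model: ordinary least squares."""
    return np.linalg.lstsq(Xf, yf, rcond=None)[0]

# Cross-fitting: nuisances trained on one fold, evaluated on the other,
# then the fold roles are swapped.
idx = rng.permutation(n)
folds = [idx[: n // 2], idx[n // 2:]]
psi = np.empty(n)
for k in (0, 1):
    tr, ev = folds[k], folds[1 - k]
    b = fit_logistic(X[tr], t[tr])
    e = 1 / (1 + np.exp(-X[ev] @ b))              # propensity scores
    g1 = fit_ols(X[tr][t[tr] == 1], y[tr][t[tr] == 1])
    g0 = fit_ols(X[tr][t[tr] == 0], y[tr][t[tr] == 0])
    m1, m0 = X[ev] @ g1, X[ev] @ g0               # outcome predictions
    # Doubly-robust (AIPW) score for the average causal effect.
    psi[ev] = (m1 - m0
               + t[ev] * (y[ev] - m1) / e
               - (1 - t[ev]) * (y[ev] - m0) / (1 - e))

ace_hat = psi.mean()
```

The "doubly-robust" property means ace_hat remains consistent if either the propensity model or the outcome model is misspecified, which is what makes the combination with flexible machine-learning nuisances attractive.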
arXiv Detail & Related papers (2020-04-21T23:09:55Z) - Localized Debiased Machine Learning: Efficient Inference on Quantile
Treatment Effects and Beyond [69.83813153444115]
We consider an efficient estimating equation for the (local) quantile treatment effect ((L)QTE) in causal inference.
Debiased machine learning (DML) is a data-splitting approach to estimating high-dimensional nuisances.
We propose localized debiased machine learning (LDML), which avoids this burdensome step.
arXiv Detail & Related papers (2019-12-30T14:42:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.