SGMM: Stochastic Approximation to Generalized Method of Moments
- URL: http://arxiv.org/abs/2308.13564v2
- Date: Mon, 30 Oct 2023 22:13:18 GMT
- Title: SGMM: Stochastic Approximation to Generalized Method of Moments
- Authors: Xiaohong Chen, Sokbae Lee, Yuan Liao, Myung Hwan Seo, Youngki Shin,
Myunghyun Song
- Abstract summary: We introduce a new class of algorithms, Stochastic Generalized Method of Moments (SGMM), for estimation and inference on (overidentified) moment restriction models.
Our SGMM is a novel stochastic approximation alternative to the popular Hansen (1982) (offline) GMM, and offers fast and scalable implementation with the ability to handle streaming datasets in real time.
- Score: 8.48870560391056
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a new class of algorithms, Stochastic Generalized Method of
Moments (SGMM), for estimation and inference on (overidentified) moment
restriction models. Our SGMM is a novel stochastic approximation alternative to
the popular Hansen (1982) (offline) GMM, and offers fast and scalable
implementation with the ability to handle streaming datasets in real time. We
establish the almost sure convergence, and the (functional) central limit
theorem for the inefficient online 2SLS and the efficient SGMM. Moreover, we
propose online versions of the Durbin-Wu-Hausman and Sargan-Hansen tests that
can be seamlessly integrated within the SGMM framework. Extensive Monte Carlo
simulations show that as the sample size increases, the SGMM matches the
standard (offline) GMM in estimation accuracy while gaining in computational
efficiency, indicating its practical value for both large-scale and online
datasets. We demonstrate the efficacy of our approach via a proof of concept
using two well-known empirical examples with large sample sizes.
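As a rough illustration of the streaming idea described in the abstract, the sketch below maintains running cross-moment averages over a data stream and re-solves the 2SLS/GMM normal equations as each observation arrives. This is a simplified plug-in variant assumed for exposition, not the paper's stochastic-approximation recursion; the function name `online_2sls`, the simulated design, and all tuning choices are hypothetical.

```python
# Hedged sketch (not the paper's algorithm): a streaming 2SLS estimator that
# keeps running cross-moment averages and re-solves the 2SLS/GMM normal
# equations as each observation (y_t, x_t, z_t) arrives from the stream.
import numpy as np

def online_2sls(stream):
    """stream yields (y, x, z): scalar outcome, regressors (p,), instruments (q,), q >= p."""
    S_zz = S_zx = S_zy = None
    n = 0
    for y, x, z in stream:
        x, z = np.atleast_1d(x), np.atleast_1d(z)
        if S_zz is None:
            p, q = x.size, z.size
            S_zz, S_zx, S_zy = np.zeros((q, q)), np.zeros((q, p)), np.zeros(q)
        n += 1
        # recursive updates of the running cross moments, O(pq) work per step
        S_zz += (np.outer(z, z) - S_zz) / n
        S_zx += (np.outer(z, x) - S_zx) / n
        S_zy += (z * y - S_zy) / n
        if n >= q:  # wait until the weighting matrix is (generically) invertible
            W = np.linalg.pinv(S_zz)                      # 2SLS weighting matrix
            A = S_zx.T @ W
            beta = np.linalg.pinv(A @ S_zx) @ (A @ S_zy)  # (S_zx' W S_zx)^{-1} S_zx' W S_zy
            yield n, beta

# Toy usage: a simulated overidentified linear IV design (3 instruments, 2 endogenous regressors).
rng = np.random.default_rng(0)
beta_true = np.array([1.0, -0.5])

def simulate(T=20000):
    for _ in range(T):
        z = rng.normal(size=3)
        u = rng.normal()
        x = np.array([z[0] + z[2] + 0.5 * u + rng.normal(),
                      z[1] - z[2] + 0.5 * u + rng.normal()])
        yield x @ beta_true + u, x, z

for n, beta_hat in online_2sls(simulate()):
    pass
print(n, beta_hat)  # beta_hat should approach beta_true as the stream grows
```

The paper's SGMM instead uses stochastic-approximation updates (as the abstract states), which avoids re-solving a linear system at every step and accommodates an online estimate of the efficient weighting matrix; the same framework also carries the online Durbin-Wu-Hausman and Sargan-Hansen tests mentioned above.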
Related papers
- Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC can perform both parameter estimation and particle proposal adaptation efficiently and entirely on the fly.
arXiv Detail & Related papers (2023-12-19T21:45:38Z) - Bridging the Usability Gap: Theoretical and Methodological Advances for Spectral Learning of Hidden Markov Models [0.8287206589886879]
The Baum-Welch (B-W) algorithm is the most widely accepted method for inferring hidden Markov models (HMMs).
It is prone to getting stuck in local optima, and can be too slow for many real-time applications.
We propose a novel algorithm called projected SHMM (PSHMM) that mitigates the problem of error propagation.
arXiv Detail & Related papers (2023-02-15T02:58:09Z) - Stochastic First-Order Learning for Large-Scale Flexibly Tied Gaussian
Mixture Model [3.4546761246181696]
We propose a new stochastic first-order optimization algorithm on the manifold of Gaussian Mixture Models (GMMs).
We observe that these methods can outperform the expectation-maximization algorithm in terms of attaining better likelihood, needing fewer epochs for convergence, and consuming less time per epoch.
arXiv Detail & Related papers (2022-12-11T04:24:52Z) - Robust Algorithms for GMM Estimation: A Finite Sample Viewpoint [30.839245814393724]
A generic method of solving moment conditions is the Generalized Method of Moments (GMM).
We develop a GMM estimator that can tolerate a constant $\epsilon$ fraction of adversarially corrupted samples, with an $\ell_2$ recovery guarantee of $O(\sqrt{\epsilon})$.
Our algorithm and assumptions apply to instrumental variables linear and logistic regression.
arXiv Detail & Related papers (2021-10-06T21:06:22Z) - Continual Learning with Fully Probabilistic Models [70.3497683558609]
We present an approach for continual learning based on fully probabilistic (or generative) models of machine learning.
We propose a pseudo-rehearsal approach using a Gaussian Mixture Model (GMM) instance for both generator and classifier functionalities.
We show that the resulting Gaussian Mixture Replay (GMR) approach achieves state-of-the-art performance on common class-incremental learning problems at very competitive time and memory complexity.
arXiv Detail & Related papers (2021-04-19T12:26:26Z) - Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAEs) are a powerful and widely-used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs (see the sketch after this list).
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
arXiv Detail & Related papers (2021-01-06T17:36:26Z) - Scaling Hidden Markov Language Models [118.55908381553056]
This work revisits the challenge of scaling HMMs to language modeling datasets.
We propose methods for scaling HMMs to massive state spaces while maintaining efficient exact inference, a compact parameterization, and effective regularization.
arXiv Detail & Related papers (2020-11-09T18:51:55Z) - Online Covariance Matrix Estimation in Stochastic Gradient Descent [10.153224593032677]
Stochastic gradient descent (SGD) is widely used for parameter estimation, especially for huge data sets and online learning.
This paper aims at quantifying statistical inference of SGD-based estimates in an online setting.
arXiv Detail & Related papers (2020-02-10T17:46:10Z) - Scalable Hybrid HMM with Gaussian Process Emission for Sequential
Time-series Data Clustering [13.845932997326571]
A Hidden Markov Model (HMM) combined with a Gaussian Process (GP) emission can be effectively used to estimate hidden states.
This paper proposes a scalable learning method for HMM-GPSM.
arXiv Detail & Related papers (2020-01-07T07:28:21Z) - Semi-Supervised Learning with Normalizing Flows [54.376602201489995]
FlowGMM is an end-to-end approach to generative semi-supervised learning with normalizing flows.
We show promising results on a wide range of applications, including AG-News and Yahoo Answers text data.
arXiv Detail & Related papers (2019-12-30T17:36:33Z) - Q-GADMM: Quantized Group ADMM for Communication Efficient Decentralized Machine Learning [66.18202188565922]
We propose a communication-efficient decentralized machine learning (ML) algorithm, coined Quantized Group ADMM (Q-GADMM).
We develop a novel quantization method to adaptively adjust quantization levels and their probabilities, while proving the convergence of Q-GADMM for convex functions.
arXiv Detail & Related papers (2019-10-23T10:47:06Z)
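For the "Cauchy-Schwarz Regularized Autoencoder" entry above: the claim that the Cauchy-Schwarz divergence can be computed analytically for GMMs rests on the Gaussian identity $\int N(x; m_1, S_1)\,N(x; m_2, S_2)\,dx = N(m_1; m_2, S_1 + S_2)$. Below is a minimal, self-contained sketch of that computation; it is not that paper's objective or implementation, and the function names are hypothetical.

```python
# Hedged sketch: closed-form Cauchy-Schwarz divergence between two Gaussian
# mixtures, using the Gaussian product identity stated above.
# Illustrative only; not the paper's training objective or implementation.
import numpy as np
from scipy.stats import multivariate_normal

def gmm_cross_term(w_a, mu_a, cov_a, w_b, mu_b, cov_b):
    """Compute the integral of p(x) * q(x) over x for two full-covariance GMMs."""
    total = 0.0
    for wi, mi, Si in zip(w_a, mu_a, cov_a):
        for wj, mj, Sj in zip(w_b, mu_b, cov_b):
            # integral of N(x; mi, Si) * N(x; mj, Sj) dx = N(mi; mj, Si + Sj)
            total += wi * wj * multivariate_normal.pdf(mi, mean=mj, cov=Si + Sj)
    return total

def cauchy_schwarz_divergence(gmm_p, gmm_q):
    """D_CS(p, q) = -log(int p q) + 0.5 log(int p^2) + 0.5 log(int q^2) >= 0."""
    pq = gmm_cross_term(*gmm_p, *gmm_q)
    pp = gmm_cross_term(*gmm_p, *gmm_p)
    qq = gmm_cross_term(*gmm_q, *gmm_q)
    return -np.log(pq) + 0.5 * np.log(pp) + 0.5 * np.log(qq)

# Two toy one-dimensional, two-component mixtures (weights, means, covariances).
p = ([0.5, 0.5], [np.array([0.0]), np.array([3.0])], [np.eye(1), np.eye(1)])
q = ([0.3, 0.7], [np.array([0.5]), np.array([2.5])], [np.eye(1), 2.0 * np.eye(1)])
print(cauchy_schwarz_divergence(p, p))  # 0.0 for identical mixtures
print(cauchy_schwarz_divergence(p, q))  # strictly positive for different mixtures
```

For identical mixtures the divergence is zero, and it is non-negative in general by the Cauchy-Schwarz inequality, which is what makes it usable as a training penalty.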