Large-Scale Shrinkage Estimation under Markovian Dependence
- URL: http://arxiv.org/abs/2003.01873v2
- Date: Thu, 12 Mar 2020 23:25:16 GMT
- Title: Large-Scale Shrinkage Estimation under Markovian Dependence
- Authors: Bowen Gang, Gourab Mukherjee and Wenguang Sun
- Abstract summary: We consider the problem of simultaneous estimation of a sequence of dependent parameters that are generated from a hidden Markov model.
We study the roles of statistical shrinkage for improved estimation of these dependent parameters.
Our proposed method elegantly combines non-parametric shrinkage ideas with efficient estimation of the hidden states under Markovian dependence.
- Score: 0.348062676775249
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of simultaneous estimation of a sequence of dependent
parameters that are generated from a hidden Markov model. Based on observing a
noise contaminated vector of observations from such a sequence model, we
consider simultaneous estimation of all the parameters irrespective of their
hidden states under squared error loss. We study the roles of statistical
shrinkage for improved estimation of these dependent parameters. Being
completely agnostic about the distributional properties of the unknown underlying
hidden Markov model, we develop a novel non-parametric shrinkage algorithm. Our
proposed method elegantly combines Tweedie-based non-parametric
shrinkage ideas with efficient estimation of the hidden states under Markovian
dependence. Based on extensive numerical experiments, we establish superior
performance of our proposed algorithm compared to non-shrinkage-based
state-of-the-art parametric as well as non-parametric algorithms used in hidden
Markov models. We provide decision-theoretic properties of our methodology and
exhibit its enhanced efficacy over popular shrinkage methods built under
independence. We demonstrate the application of our methodology on real-world
datasets for analyzing temporally dependent social and economic indicators
such as search trends and unemployment rates as well as estimating spatially
dependent Copy Number Variations.
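To make the shrinkage ingredient concrete, the minimal sketch below applies Tweedie's formula with a kernel-density estimate of the marginal: for x_i | theta_i ~ N(theta_i, sigma^2), the posterior mean is x_i + sigma^2 * d/dx log f(x_i). This illustrates only the non-parametric shrinkage idea; it is not the authors' exact algorithm, the hidden-state estimation under Markovian dependence is omitted, and the function name and toy data below are our own assumptions.

```python
# Minimal sketch of Tweedie-based nonparametric shrinkage (illustration only,
# not the paper's exact algorithm; the Markovian hidden-state step is omitted).
import numpy as np
from scipy.stats import gaussian_kde

def tweedie_shrinkage(x, sigma=1.0, eps=1e-3):
    """Tweedie's formula: E[theta | x] = x + sigma^2 * d/dx log f(x),
    with the marginal density f estimated by a Gaussian KDE."""
    kde = gaussian_kde(x)                                # nonparametric estimate of f
    f = np.maximum(kde(x), 1e-12)                        # f-hat(x_i), floored for stability
    df = (kde(x + eps) - kde(x - eps)) / (2.0 * eps)     # central-difference derivative of f-hat
    return x + sigma**2 * df / f                         # plug-in posterior-mean estimate

# Toy example: a two-level mean sequence observed with unit-variance noise.
# (An i.i.d. state draw is used purely for brevity; the paper's setting
# generates the states from a hidden Markov chain.)
rng = np.random.default_rng(0)
states = rng.random(5000) < 0.15
theta = np.where(states, 3.0, 0.0)
x = theta + rng.standard_normal(theta.size)

theta_hat = tweedie_shrinkage(x, sigma=1.0)
print("No-shrinkage MSE:     ", np.mean((x - theta) ** 2))
print("Tweedie shrinkage MSE:", np.mean((theta_hat - theta) ** 2))
```

In the paper's setting this type of correction is combined with efficient estimation of the hidden states, which a single pooled KDE like the one above deliberately ignores.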
Related papers
- Sample-efficient neural likelihood-free Bayesian inference of implicit HMMs [1.8843687952462742]
We propose a novel, sample-efficient likelihood-free method for estimating the high-dimensional hidden states of an implicit HMM.
Our approach relies on learning directly the intractable posterior distribution of the hidden states, using an autoregressive-flow, by exploiting the Markov property.
arXiv Detail & Related papers (2024-05-02T21:13:34Z) - Leveraging Self-Consistency for Data-Efficient Amortized Bayesian Inference [9.940560505044122]
We propose a method to improve the efficiency and accuracy of amortized Bayesian inference.
We estimate the marginal likelihood based on approximate representations of the joint model.
arXiv Detail & Related papers (2023-10-06T17:41:41Z) - A Tale of Sampling and Estimation in Discounted Reinforcement Learning [50.43256303670011]
We present a minimax lower bound on the discounted mean estimation problem.
We show that estimating the mean by directly sampling from the discounted kernel of the Markov process brings compelling statistical properties (see the sampling sketch after this list).
arXiv Detail & Related papers (2023-04-11T09:13:17Z) - Convergence of uncertainty estimates in Ensemble and Bayesian sparse
model discovery [4.446017969073817]
We show empirical success in terms of accuracy and robustness to noise with a bootstrapping-based sequential thresholding least-squares estimator.
We show that this bootstrapping-based ensembling technique can perform a provably correct variable selection procedure with an exponential convergence rate of the error rate.
arXiv Detail & Related papers (2023-01-30T04:07:59Z) - A Priori Denoising Strategies for Sparse Identification of Nonlinear
Dynamical Systems: A Comparative Study [68.8204255655161]
We investigate and compare the performance of several local and global smoothing techniques to a priori denoise the state measurements.
We show that, in general, global methods, which use the entire measurement data set, outperform local methods, which employ a neighboring data subset around a local point (see the smoothing sketch after this list).
arXiv Detail & Related papers (2022-01-29T23:31:25Z) - Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z) - Scalable Intervention Target Estimation in Linear Models [52.60799340056917]
Current approaches to causal structure learning either work with known intervention targets or use hypothesis testing to discover the unknown intervention targets.
This paper proposes a scalable and efficient algorithm that consistently identifies all intervention targets.
The proposed algorithm can be used to also update a given observational Markov equivalence class into the interventional Markov equivalence class.
arXiv Detail & Related papers (2021-11-15T03:16:56Z) - Slice Sampling for General Completely Random Measures [74.24975039689893]
We present a novel Markov chain Monte Carlo algorithm for posterior inference that adaptively sets the truncation level using auxiliary slice variables.
The efficacy of the proposed algorithm is evaluated on several popular nonparametric models.
arXiv Detail & Related papers (2020-06-24T17:53:53Z) - Nonparametric Score Estimators [49.42469547970041]
Estimating the score from a set of samples generated by an unknown distribution is a fundamental task in inference and learning of probabilistic models.
We provide a unifying view of these estimators under the framework of regularized nonparametric regression.
We propose score estimators based on iterative regularization that enjoy computational benefits from curl-free kernels and fast convergence.
arXiv Detail & Related papers (2020-05-20T15:01:03Z) - Causal Modeling with Stochastic Confounders [11.881081802491183]
This work extends causal inference to settings with stochastic confounders.
We propose a new approach to variational estimation for causal inference based on a representer theorem with a random input space.
arXiv Detail & Related papers (2020-04-24T00:34:44Z) - Bayesian System ID: Optimal management of parameter, model, and
measurement uncertainty [0.0]
We evaluate the robustness of a probabilistic formulation of system identification (ID) to sparse, noisy, and indirect data.
We show that the log posterior has improved geometric properties compared with the objective function surfaces of traditional methods.
arXiv Detail & Related papers (2020-03-04T22:48:30Z)
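As referenced in the discounted reinforcement learning entry above, the sketch below illustrates what "sampling from the discounted kernel" of a Markov chain can mean: each rollout is terminated independently with probability 1 - gamma, so the stopping state is distributed according to the normalized discounted visitation distribution. The toy chain, reward function, and estimator name are illustrative assumptions, not taken from that paper.

```python
# Illustrative sketch (assumed toy chain, not the cited paper's estimator):
# estimate the normalized discounted mean reward (1 - gamma) * E[sum_t gamma^t r(s_t)]
# by averaging rewards at the stopping states of geometrically truncated rollouts.
import numpy as np

P = np.array([[0.9, 0.1],      # toy 2-state transition matrix
              [0.2, 0.8]])
r = np.array([0.0, 1.0])       # state-dependent reward
gamma = 0.95
rng = np.random.default_rng(1)

def discounted_kernel_sample(s0, n_rollouts=20000):
    """Each step continues w.p. gamma and stops w.p. 1 - gamma;
    the stopping state follows the discounted visitation distribution."""
    total = 0.0
    for _ in range(n_rollouts):
        s = s0
        while rng.random() < gamma:            # continue the chain w.p. gamma
            s = rng.choice(2, p=P[s])          # otherwise stop at the current state
        total += r[s]
    return total / n_rollouts

# Exact value for comparison: (1 - gamma) * e_{s0}^T (I - gamma P)^{-1} r
exact = (1 - gamma) * np.linalg.solve(np.eye(2) - gamma * P, r)
print("sampled:", discounted_kernel_sample(0), "exact:", exact[0])
```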
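Similarly, for the denoising-strategy comparison above, here is a hedged illustration of the local-versus-global distinction using two off-the-shelf smoothers (a Savitzky-Golay filter as the local method and a smoothing spline as the global one); the signal and the specific smoothers are our own choices, not necessarily those evaluated in that paper.

```python
# Illustrative local vs. global smoothing of noisy state measurements
# (assumed example; the cited comparison study uses its own set of methods).
import numpy as np
from scipy.signal import savgol_filter
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 500)
clean = np.sin(t) * np.exp(-0.1 * t)            # assumed smooth "state" trajectory
noisy = clean + 0.05 * rng.standard_normal(t.size)

# Local method: polynomial fit inside a sliding window of 31 samples.
local = savgol_filter(noisy, window_length=31, polyorder=3)

# Global method: smoothing spline fitted to the entire record at once.
global_fit = UnivariateSpline(t, noisy, s=t.size * 0.05**2)(t)

for name, est in [("local (Savitzky-Golay)", local), ("global (spline)", global_fit)]:
    print(name, "RMSE:", np.sqrt(np.mean((est - clean) ** 2)))
```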
This list is automatically generated from the titles and abstracts of the papers in this site.