Scalable Inference of Sparsely-changing Markov Random Fields with Strong
Statistical Guarantees
- URL: http://arxiv.org/abs/2102.03585v1
- Date: Sat, 6 Feb 2021 13:53:00 GMT
- Authors: Salar Fattahi and Andres Gomez
- Abstract summary: We introduce a new class of constrained optimization problems for the inference of sparsely-changing MRFs.
Our method is extremely efficient in practice: it can accurately estimate sparsely-changing graphical models with more than 500 million variables in less than one hour.
- Score: 10.127456032874978
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study the problem of inferring time-varying Markov random
fields (MRF), where the underlying graphical model is both sparse and changes
sparsely over time. Most of the existing methods for the inference of
time-varying MRFs rely on the regularized maximum likelihood estimation (MLE),
that typically suffer from weak statistical guarantees and high computational
time. Instead, we introduce a new class of constrained optimization problems
for the inference of sparsely-changing MRFs. The proposed optimization problem
is formulated based on the exact $\ell_0$ regularization, and can be solved in
near-linear time and memory. Moreover, we show that the proposed estimator
enjoys a provably small estimation error. As a special case, we derive sharp
statistical guarantees for the inference of sparsely-changing Gaussian MRFs
(GMRF) in the high-dimensional regime, showing that such problems can be
learned with as few as one sample per time. Our proposed method is extremely
efficient in practice: it can accurately estimate sparsely-changing graphical
models with more than 500 million variables in less than one hour.
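As a rough illustration of the problem setting (not the authors' algorithm), the sketch below estimates one precision matrix per time step and hard-thresholds both the entries and their temporal differences, mimicking the two kinds of $\ell_0$-style sparsity the abstract describes: a sparse graph at each time and sparse changes across time. The threshold value and ridge term are arbitrary choices for the toy example.

```python
import numpy as np

def estimate_sparse_changes(samples, thresh=0.1):
    """Toy sketch of sparsely-changing GMRF estimation: per-time
    precision matrices with hard-thresholding of entries (sparse
    graph) and of temporal differences (sparse changes).
    Illustrative only; NOT the paper's near-linear-time method."""
    precisions = []
    prev = None
    for X in samples:  # X has shape (n_t, p): n_t samples at time t
        cov = np.cov(X, rowvar=False) + 1e-3 * np.eye(X.shape[1])
        theta = np.linalg.inv(cov)
        theta[np.abs(theta) < thresh] = 0.0  # enforce graph sparsity
        if prev is not None:
            # keep only large temporal changes; small ones revert to
            # the previous estimate (sparsely-changing structure)
            small = np.abs(theta - prev) < thresh
            theta[small] = prev[small]
        precisions.append(theta)
        prev = theta
    return precisions
```

The two thresholding steps stand in for the exact $\ell_0$ constraints of the paper, which bound the number of nonzero entries and the number of changed entries directly.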
Related papers
- Relaxed Quantile Regression: Prediction Intervals for Asymmetric Noise [51.87307904567702]
Quantile regression is a leading approach for obtaining prediction intervals via the empirical estimation of quantiles in the distribution of outputs.
We propose Relaxed Quantile Regression (RQR), a direct alternative to quantile-regression-based interval construction that removes an arbitrary constraint of that approach.
We demonstrate that this added flexibility results in intervals with an improvement in desirable qualities.
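For context, quantile-regression intervals rest on the pinball (check) loss, whose expected value is minimized by the $\tau$-quantile of the output distribution. The sketch below shows this standard loss only; RQR's specific relaxation is not reproduced here.

```python
import numpy as np

def pinball_loss(y, q, tau):
    """Pinball (check) loss: its expectation over y is minimized
    when q equals the tau-quantile of y's distribution."""
    d = y - q
    return np.mean(np.maximum(tau * d, (tau - 1.0) * d))

# A (lo, hi) pair of quantile predictions yields a prediction
# interval with nominal coverage hi - lo, e.g. tau = 0.05 and
# tau = 0.95 for a 90% interval.
```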
arXiv Detail & Related papers (2024-06-05T13:36:38Z) - Best Arm Identification with Fixed Budget: A Large Deviation Perspective [54.305323903582845]
We present sred, a truly adaptive algorithm that can reject arms in any round based on the observed empirical gaps between the rewards of the arms.
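For background, the classical successive-rejects scheme (which sred makes adaptive) eliminates the empirically worst arm once per phase. The sketch below is this standard fixed-phase variant, not sred itself; the Gaussian rewards and phase length are arbitrary illustration choices.

```python
import numpy as np

def successive_rejects(arm_means, pulls_per_phase, rng):
    """Classical successive-rejects sketch (illustrative, not sred):
    each phase pulls every surviving arm equally, then eliminates
    the arm with the lowest empirical mean reward."""
    k = len(arm_means)
    sums = np.zeros(k)
    counts = np.zeros(k)
    active = list(range(k))
    while len(active) > 1:
        for a in active:
            rewards = rng.normal(arm_means[a], 1.0, size=pulls_per_phase)
            sums[a] += rewards.sum()
            counts[a] += pulls_per_phase
        worst = min(active, key=lambda a: sums[a] / counts[a])
        active.remove(worst)
    return active[0]  # surviving arm = best-arm guess
```

sred differs in that rejection decisions may happen in any round, driven by the observed empirical gaps rather than a fixed phase schedule.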
arXiv Detail & Related papers (2023-12-19T13:17:43Z) - Deep Ensembles Meets Quantile Regression: Uncertainty-aware Imputation
for Time Series [49.992908221544624]
Time series data often exhibit numerous missing values; filling them in is the time series imputation task.
Previous deep learning methods have been shown to be effective for time series imputation.
We propose a non-generative time series imputation method that produces accurate imputations with inherent uncertainty.
arXiv Detail & Related papers (2023-12-03T05:52:30Z) - Solution Path of Time-varying Markov Random Fields with Discrete
Regularization [7.79230326339002]
We study the problem of inferring time-varying Markov random fields (MRFs) with different discrete and temporal regularizations on the parameters.
We show that the entire solution path for all sparsity levels can be obtained in $\mathcal{O}(Tp^3)$ time, where $T$ is the number of time steps and $p$ is the number of unknown parameters at any given time.
arXiv Detail & Related papers (2023-07-25T18:19:50Z) - Estimation Beyond Data Reweighting: Kernel Method of Moments [9.845144212844662]
We provide an empirical likelihood estimator based on maximum mean discrepancy, which we term the kernel method of moments (KMM).
We show that our method achieves competitive performance on several conditional moment restriction tasks.
arXiv Detail & Related papers (2023-05-18T11:52:43Z) - Compound Batch Normalization for Long-tailed Image Classification [77.42829178064807]
We propose a compound batch normalization method based on a Gaussian mixture.
It can model the feature space more comprehensively and reduce the dominance of head classes.
The proposed method outperforms existing methods on long-tailed image classification.
arXiv Detail & Related papers (2022-12-02T07:31:39Z) - A fast asynchronous MCMC sampler for sparse Bayesian inference [10.535140830570256]
We propose a very fast approximate Markov Chain Monte Carlo (MCMC) sampling framework that is applicable to a large class of sparse Bayesian inference problems.
We show that in high-dimensional linear regression problems, the Markov chain generated by the proposed algorithm admits an invariant distribution that recovers correctly the main signal.
arXiv Detail & Related papers (2021-08-14T02:20:49Z) - Reducing the Amortization Gap in Variational Autoencoders: A Bayesian
Random Function Approach [38.45568741734893]
Inference in our GP model is done by a single feed-forward pass through the network, significantly faster than semi-amortized methods.
We show that our approach attains higher test-data likelihood than the state of the art on several benchmark datasets.
arXiv Detail & Related papers (2021-02-05T13:01:12Z) - The Variational Method of Moments [65.91730154730905]
The conditional moment problem is a powerful formulation for describing structural causal parameters in terms of observables.
Motivated by a variational minimax reformulation of OWGMM, we define a very general class of estimators for the conditional moment problem.
We provide algorithms for valid statistical inference based on the same kind of variational reformulations.
arXiv Detail & Related papers (2020-12-17T07:21:06Z) - Efficient semidefinite-programming-based inference for binary and
multi-class MRFs [83.09715052229782]
We propose an efficient method for computing the partition function or MAP estimate in a pairwise MRF.
We extend semidefinite relaxations from the typical binary MRF to the full multi-class setting, and develop a compact semidefinite relaxation that can again be solved efficiently with the same approach.
arXiv Detail & Related papers (2020-12-04T15:36:29Z) - Differentially Private (Gradient) Expectation Maximization Algorithm
with Statistical Guarantees [25.996994681543903]
(Gradient) Expectation Maximization (EM) is a widely used algorithm for estimating the maximum likelihood of mixture models or incomplete data problems.
Previous research on this problem has already led to the discovery of some Differentially Private (DP) algorithms for (Gradient) EM.
We propose in this paper the first DP version of (Gradient) EM algorithm with statistical guarantees.
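The standard building block behind differentially private gradient methods, including DP variants of gradient EM, is the Gaussian mechanism applied to clipped per-sample gradients. The sketch below shows that generic step only; it is not the paper's algorithm, and the privacy accounting that calibrates `noise_mult` is omitted.

```python
import numpy as np

def dp_gradient_step(params, per_sample_grads, clip, noise_mult, lr, rng):
    """One differentially-private gradient step (Gaussian mechanism):
    clip each per-sample gradient to L2 norm `clip`, average them,
    then add calibrated Gaussian noise before updating parameters."""
    g = np.asarray(per_sample_grads, dtype=float)
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g = g * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    noise = rng.normal(scale=noise_mult * clip / len(g), size=params.shape)
    return params - lr * (g.mean(axis=0) + noise)
```

In a DP gradient-EM loop, `per_sample_grads` would be the per-datum gradients of the Q-function at the current parameter estimate.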
arXiv Detail & Related papers (2020-10-22T03:41:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.