Scalable Inference of Sparsely-changing Markov Random Fields with Strong
Statistical Guarantees
- URL: http://arxiv.org/abs/2102.03585v1
- Date: Sat, 6 Feb 2021 13:53:00 GMT
- Title: Scalable Inference of Sparsely-changing Markov Random Fields with Strong
Statistical Guarantees
- Authors: Salar Fattahi and Andres Gomez
- Abstract summary: We introduce a new class of constrained optimization problems for the inference of sparsely-changing MRFs.
Our method is extremely efficient in practice: it can accurately estimate sparsely-changing graphical models with more than 500 million variables in less than one hour.
- Score: 10.127456032874978
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study the problem of inferring time-varying Markov random
fields (MRF), where the underlying graphical model is both sparse and changes
sparsely over time. Most of the existing methods for the inference of
time-varying MRFs rely on the regularized maximum likelihood estimation (MLE),
that typically suffer from weak statistical guarantees and high computational
time. Instead, we introduce a new class of constrained optimization problems
for the inference of sparsely-changing MRFs. The proposed optimization problem
is formulated based on the exact $\ell_0$ regularization, and can be solved in
near-linear time and memory. Moreover, we show that the proposed estimator
enjoys a provably small estimation error. As a special case, we derive sharp
statistical guarantees for the inference of sparsely-changing Gaussian MRFs
(GMRF) in the high-dimensional regime, showing that such problems can be
learned with as few as one sample per time. Our proposed method is extremely
efficient in practice: it can accurately estimate sparsely-changing graphical
models with more than 500 million variables in less than one hour.
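The toy sketch below is purely illustrative of the sparsely-changing structure: it is a hard-thresholding heuristic, not the exact $\ell_0$-constrained estimator from the paper, and the thresholds `tau_sparse` and `tau_change` are hypothetical tuning knobs.

```python
# Toy sketch of sparsely-changing GMRF estimation (illustrative only; a
# hard-thresholding heuristic, not the paper's l0-constrained estimator).
import numpy as np

def hard_threshold(A, tau):
    """Zero out off-diagonal entries of A whose magnitude falls below tau."""
    B = A.copy()
    mask = np.abs(B) < tau
    np.fill_diagonal(mask, False)  # never drop diagonal entries
    B[mask] = 0.0
    return B

def sparsely_changing_gmrf(samples, tau_sparse=0.1, tau_change=0.05, ridge=1e-2):
    """samples: list of (n_t, p) arrays, one per time step.
    Returns precision-matrix estimates that are sparse and change sparsely."""
    thetas, prev = [], None
    for X in samples:
        p = X.shape[1]
        S = np.cov(X, rowvar=False) + ridge * np.eye(p)  # regularized covariance
        theta = hard_threshold(np.linalg.inv(S), tau_sparse)
        if prev is not None:
            # Keep only the large temporal changes, so theta_t - theta_{t-1} is sparse.
            theta = prev + hard_threshold(theta - prev, tau_change)
        thetas.append(theta)
        prev = theta
    return thetas

rng = np.random.default_rng(0)
ests = sparsely_changing_gmrf([rng.standard_normal((50, 10)) for _ in range(5)])
print(len(ests), ests[0].shape)  # 5 (10, 10)
```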
Related papers
- Bypassing the Noisy Parity Barrier: Learning Higher-Order Markov Random Fields from Dynamics [21.976109703401114]
We consider the problem of learning graphical models, also known as Markov random fields (MRFs) from temporally correlated samples.
In particular, we show that given a trajectory with $\widetilde{O}_k(n)$ site updates of an order-$k$ MRF from the Glauber dynamics, there is an algorithm that recovers the graph and the parameters in $\widetilde{O}_k(n^2)$ time.
Our results thus surprisingly show that this more realistic, but intuitively less tractable, model for MRFs actually leads to efficiency far beyond what ...
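For context, the sketch below simulates a Glauber-dynamics trajectory of single-site updates for a pairwise (order-2) Ising MRF, i.e. the kind of temporally correlated data the learning result above takes as input; the couplings `J` and field `h` are synthetic and this is not the recovery algorithm itself.

```python
# Sketch of a Glauber-dynamics trajectory for a pairwise (order-2) Ising MRF;
# the learning task takes such a trajectory of single-site updates as input.
import numpy as np

def glauber_trajectory(J, h, n_updates, seed=0):
    """Run single-site Glauber updates on an Ising model with couplings J, field h.
    Returns the list of (site, new_value) updates and the final configuration."""
    rng = np.random.default_rng(seed)
    n = len(h)
    x = rng.choice([-1, 1], size=n)
    updates = []
    for _ in range(n_updates):
        i = rng.integers(n)
        field = J[i] @ x - J[i, i] * x[i] + h[i]     # local field at site i
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))  # conditional P(x_i = +1 | rest)
        x[i] = 1 if rng.random() < p_plus else -1
        updates.append((int(i), int(x[i])))
    return updates, x

rng = np.random.default_rng(6)
n = 20
J = 0.2 * rng.standard_normal((n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0.0)
h = 0.1 * rng.standard_normal(n)
traj, final = glauber_trajectory(J, h, n_updates=5 * n)
print(len(traj), final[:5])
```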
arXiv Detail & Related papers (2024-09-09T02:32:45Z) - A sparse PAC-Bayesian approach for high-dimensional quantile prediction [0.0]
This paper presents a novel probabilistic machine learning approach for high-dimensional quantile prediction.
It uses a pseudo-Bayesian framework with a scaled Student-t prior and Langevin Monte Carlo for efficient computation.
Its effectiveness is validated through simulations and real-world data, where it performs competitively against established frequentist and Bayesian techniques.
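A minimal sketch of the ingredients mentioned above, assuming a linear quantile model with a pinball-loss pseudo-likelihood, an independent Student-t-style prior, and unadjusted Langevin updates; the step size, prior scale, and temperature are hypothetical choices, and this is not the paper's estimator.

```python
# Minimal sketch (assumptions: linear model, pinball-loss pseudo-likelihood,
# independent Student-t-style prior); not the paper's actual estimator.
import numpy as np

def pinball_grad(w, X, y, tau):
    """Gradient of the pinball (quantile) loss sum_i rho_tau(y_i - x_i.w)."""
    r = y - X @ w
    return -X.T @ (tau - (r < 0).astype(float))

def log_prior_grad(w, nu=3.0, scale=0.1):
    """Gradient of an independent Student-t log prior (heavy-tailed)."""
    return -(nu + 1.0) * w / (nu * scale**2 + w**2)

def langevin_quantile(X, y, tau=0.9, temp=1.0, step=1e-4, n_iter=5000, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        # Gradient of the log pseudo-posterior, then an unadjusted Langevin step.
        grad = -temp * pinball_grad(w, X, y, tau) + log_prior_grad(w)
        w = w + step * grad + np.sqrt(2.0 * step) * rng.standard_normal(w.shape)
    return w

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 20))
y = X[:, 0] + 0.1 * rng.standard_normal(200)
print(np.round(langevin_quantile(X, y, tau=0.9)[:3], 2))
```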
arXiv Detail & Related papers (2024-09-03T08:01:01Z) - Relaxed Quantile Regression: Prediction Intervals for Asymmetric Noise [51.87307904567702]
Quantile regression is a leading approach for obtaining prediction intervals via the empirical estimation of quantiles in the distribution of outputs.
We propose Relaxed Quantile Regression (RQR), a direct alternative to quantile-regression-based interval construction that removes the constraint tying the interval boundaries to specific quantiles.
We demonstrate that this added flexibility results in intervals with an improvement in desirable qualities.
arXiv Detail & Related papers (2024-06-05T13:36:38Z) - Best Arm Identification with Fixed Budget: A Large Deviation Perspective [54.305323903582845]
We present sred, a truly adaptive algorithm that can reject arms in any round based on the observed empirical gaps between the rewards of various arms.
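A generic fixed-budget elimination sketch in this spirit (pull the surviving arms, then reject the arm with the largest empirical gap to the leader); it is not the exact sred procedure, and the budget-splitting rule is a hypothetical choice.

```python
# Generic fixed-budget elimination sketch (illustrative; not the exact sred
# algorithm): pull surviving arms, then reject the surviving arm with the
# largest empirical gap to the current leader.
import numpy as np

def fixed_budget_elimination(pull, n_arms, budget, seed=0):
    rng = np.random.default_rng(seed)
    active = list(range(n_arms))
    sums = np.zeros(n_arms)
    counts = np.zeros(n_arms)
    pulls_per_round = budget // (n_arms - 1) if n_arms > 1 else budget
    while len(active) > 1:
        for a in active:
            for _ in range(max(1, pulls_per_round // len(active))):
                sums[a] += pull(a, rng)
                counts[a] += 1
        means = sums[active] / counts[active]
        # Empirical gap = leader's mean minus each arm's mean; drop the worst arm.
        worst = active[int(np.argmin(means))]
        active.remove(worst)
    return active[0]

means_true = [0.2, 0.4, 0.5, 0.8]
best = fixed_budget_elimination(lambda a, rng: rng.normal(means_true[a], 1.0),
                                n_arms=4, budget=4000)
print("identified arm:", best)
```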
arXiv Detail & Related papers (2023-12-19T13:17:43Z) - Solution Path of Time-varying Markov Random Fields with Discrete
Regularization [7.79230326339002]
We study the problem of inferring time-varying Markov random fields (MRFs) with discrete and temporal regularization on the parameters.
We show that the entire solution path for all sparsity levels can be obtained in $\mathcal{O}(Tp^3)$ time, where $T$ is the number of time steps and $p$ is the number of unknown parameters at any given time.
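One way to see why a full sparsity path can be cheap: given a dense estimate at each time step, every sparsity level follows from a single ranking of entry magnitudes, as in the hypothetical sketch below (not the paper's algorithm).

```python
# Hypothetical sketch: given a dense parameter estimate, the whole path over
# sparsity levels comes from one sort of the entry magnitudes.
import numpy as np

def sparsity_path(theta_dense, levels):
    """theta_dense: (p, p) dense estimate; levels: iterable of nonzero counts.
    Returns {k: estimate keeping the k largest-magnitude off-diagonal entries}."""
    p = theta_dense.shape[0]
    off = np.abs(theta_dense)
    np.fill_diagonal(off, -np.inf)                # never rank diagonal entries
    order = np.argsort(off, axis=None)[::-1]      # one sort, reused for all levels
    path = {}
    for k in levels:
        keep = np.zeros(p * p, dtype=bool)
        keep[order[:k]] = True
        est = np.where(keep.reshape(p, p), theta_dense, 0.0)
        np.fill_diagonal(est, np.diag(theta_dense))
        path[k] = est
    return path

theta = np.linalg.inv(np.cov(np.random.default_rng(2).standard_normal((100, 8)),
                             rowvar=False) + 1e-2 * np.eye(8))
path = sparsity_path(theta, levels=[4, 8, 16])
print({k: int((v != 0).sum()) for k, v in path.items()})
```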
arXiv Detail & Related papers (2023-07-25T18:19:50Z) - Estimation Beyond Data Reweighting: Kernel Method of Moments [9.845144212844662]
We provide an empirical likelihood estimator based on maximum mean discrepancy, which we term the kernel method of moments (KMM).
We show that our method achieves competitive performance on several conditional moment restriction tasks.
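For reference, the sketch below computes the (biased) squared maximum mean discrepancy between a weighted sample and a reference sample with an RBF kernel; the bandwidth is a hypothetical choice and this is only the building block, not the full KMM estimator.

```python
# Sketch of the (biased) squared MMD between a weighted sample and a reference
# sample with an RBF kernel; illustrative only, not the full KMM estimator.
import numpy as np

def rbf_kernel(A, B, bandwidth=1.0):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * bandwidth ** 2))

def weighted_mmd2(X, w, Y, bandwidth=1.0):
    """Squared MMD between the w-weighted empirical distribution of X and the
    uniform empirical distribution of Y (w >= 0, sum(w) == 1)."""
    v = np.full(len(Y), 1.0 / len(Y))
    Kxx = rbf_kernel(X, X, bandwidth)
    Kyy = rbf_kernel(Y, Y, bandwidth)
    Kxy = rbf_kernel(X, Y, bandwidth)
    return w @ Kxx @ w + v @ Kyy @ v - 2.0 * w @ Kxy @ v

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 2))
Y = rng.standard_normal((100, 2)) + 0.5
w = np.full(len(X), 1.0 / len(X))
print(round(float(weighted_mmd2(X, w, Y)), 4))
```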
arXiv Detail & Related papers (2023-05-18T11:52:43Z) - Compound Batch Normalization for Long-tailed Image Classification [77.42829178064807]
We propose a compound batch normalization method based on a Gaussian mixture.
It can model the feature space more comprehensively and reduce the dominance of head classes.
The proposed method outperforms existing methods on long-tailed image classification.
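A hedged sketch of the general idea of mixture-based normalization, assuming a Gaussian mixture fitted with scikit-learn and per-component standardization; this is not the exact compound batch normalization layer from the paper.

```python
# Hedged sketch of mixture-based feature normalization (not the exact compound
# batch normalization from the paper): fit a Gaussian mixture to the features
# and standardize each feature vector by its assigned component's statistics.
import numpy as np
from sklearn.mixture import GaussianMixture

def mixture_normalize(feats, n_components=3, eps=1e-5, seed=0):
    """feats: (N, d) feature matrix. Returns component-wise standardized features."""
    gm = GaussianMixture(n_components=n_components, covariance_type="diag",
                         random_state=seed).fit(feats)
    labels = gm.predict(feats)
    mu = gm.means_[labels]                    # per-sample component mean
    sigma = np.sqrt(gm.covariances_[labels])  # per-sample component std (diagonal)
    return (feats - mu) / (sigma + eps)

rng = np.random.default_rng(5)
# Imbalanced toy features: a dominant "head" cluster and a small "tail" cluster.
feats = np.vstack([rng.normal(0.0, 1.0, (900, 16)), rng.normal(3.0, 0.5, (100, 16))])
normed = mixture_normalize(feats, n_components=2)
print(normed.shape, round(float(normed.mean()), 3))
```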
arXiv Detail & Related papers (2022-12-02T07:31:39Z) - Reducing the Amortization Gap in Variational Autoencoders: A Bayesian
Random Function Approach [38.45568741734893]
Inference in our GP model is done by a single feed forward pass through the network, significantly faster than semi-amortized methods.
We show that our approach attains higher test data likelihood than the state-of-the-arts on several benchmark datasets.
arXiv Detail & Related papers (2021-02-05T13:01:12Z) - The Variational Method of Moments [65.91730154730905]
The conditional moment problem is a powerful formulation for describing structural causal parameters in terms of observables.
Motivated by a variational minimax reformulation of OWGMM, we define a very general class of estimators for the conditional moment problem.
We provide algorithms for valid statistical inference based on the same kind of variational reformulations.
arXiv Detail & Related papers (2020-12-17T07:21:06Z) - Efficient semidefinite-programming-based inference for binary and
multi-class MRFs [83.09715052229782]
We propose an efficient method for computing the partition function or MAP estimate in a pairwise MRF.
We extend semidefinite relaxations from the typical binary MRF to the full multi-class setting, and develop a compact semidefinite relaxation that can again be solved efficiently with the same solver.
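As a reference point, a minimal MAXCUT-style semidefinite relaxation for MAP in a pairwise binary MRF, written with cvxpy and its default solver rather than the specialized fast solver the paper relies on; the coupling matrix `W` is synthetic.

```python
# Minimal MAXCUT-style SDP relaxation of pairwise binary MAP inference
# (maximize x' W x over x in {-1, +1}^n, relaxed to a PSD matrix with unit
# diagonal). Uses cvxpy's default solver; W is a synthetic coupling matrix.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(4)
n = 12
W = rng.standard_normal((n, n))
W = (W + W.T) / 2.0                      # symmetric pairwise couplings

X = cp.Variable((n, n), PSD=True)        # relaxation of the rank-1 matrix x x'
prob = cp.Problem(cp.Maximize(cp.trace(W @ X)), [cp.diag(X) == 1])
prob.solve()

# Round the SDP solution to a binary assignment via the leading eigenvector.
vals, vecs = np.linalg.eigh(X.value)
x_hat = np.sign(vecs[:, -1])
print("SDP value:", round(prob.value, 3),
      "rounded value:", round(float(x_hat @ W @ x_hat), 3))
```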
arXiv Detail & Related papers (2020-12-04T15:36:29Z) - Instability, Computational Efficiency and Statistical Accuracy [101.32305022521024]
We develop a framework that characterizes statistical accuracy in terms of the interplay between the deterministic convergence rate of the algorithm at the population level and its degree of (in)stability when applied to an empirical objective based on $n$ samples.
We provide applications of our general results to several concrete classes of models, including Gaussian mixture estimation, non-linear regression models, and informative non-response models.
arXiv Detail & Related papers (2020-05-22T22:30:52Z)