Solution Path of Time-varying Markov Random Fields with Discrete
Regularization
- URL: http://arxiv.org/abs/2307.13750v1
- Date: Tue, 25 Jul 2023 18:19:50 GMT
- Title: Solution Path of Time-varying Markov Random Fields with Discrete
Regularization
- Authors: Salar Fattahi and Andres Gomez
- Abstract summary: We study the problem of inferring sparse time-varying Markov random fields (MRFs) with different discrete and temporal regularizations on the parameters.
We show that the entire solution path for all sparsity levels can be obtained in $\mathcal{O}(pT^3)$, where $T$ is the number of time steps and $p$ is the number of unknown parameters at any given time.
- Score: 7.79230326339002
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the problem of inferring sparse time-varying Markov random fields
(MRFs) with different discrete and temporal regularizations on the parameters.
Due to the intractability of discrete regularization, most approaches for
solving this problem rely on the so-called maximum-likelihood estimation (MLE)
with relaxed regularization, which neither results in ideal statistical
properties nor scales to the dimensions encountered in realistic settings. In
this paper, we address these challenges by departing from the MLE paradigm and
resorting to a new class of constrained optimization problems with exact,
discrete regularization to promote sparsity in the estimated parameters.
Despite the nonconvex and discrete nature of our formulation, we show that it
can be solved efficiently and parametrically for all sparsity levels. More
specifically, we show that the entire solution path of the time-varying MRF for
all sparsity levels can be obtained in $\mathcal{O}(pT^3)$, where $T$ is the
number of time steps and $p$ is the number of unknown parameters at any given
time. The efficient and parametric characterization of the solution path
renders our approach highly suitable for cross-validation, where parameter
estimation is required for varying regularization values. Despite its
simplicity and efficiency, we show that our proposed approach achieves provably
small estimation error for different classes of time-varying MRFs, namely
Gaussian and discrete MRFs, with as few as one sample per time step. Utilizing our
algorithm, we can recover the complete solution path for instances of
time-varying MRFs featuring over 30 million variables in less than 12 minutes
on a standard laptop computer. Our code is available at
\url{https://sites.google.com/usc.edu/gomez/data}.
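The abstract's key computational claim is that the whole solution path over every sparsity level comes out of a single $\mathcal{O}(pT^3)$ computation, so cross-validation reduces to a cheap sweep over an already-computed path instead of re-solving the estimation problem for each regularization value. Below is a minimal sketch of that workflow; `solution_path` and `validation_loss` are hypothetical placeholders (the abstract does not specify the authors' interface), not the paper's actual code.

```python
import numpy as np

def solution_path(samples, num_sparsity_levels):
    """Hypothetical stand-in for the path algorithm: return one parameter
    estimate per sparsity level, each of shape (T, p). The paper's algorithm
    produces the entire path parametrically in O(p * T^3)."""
    T, p = samples.shape
    rng = np.random.default_rng(0)
    return [rng.standard_normal((T, p)) for _ in range(num_sparsity_levels)]

def validation_loss(theta, held_out):
    """Hypothetical held-out criterion (a negative log-pseudolikelihood in the
    real setting); squared error is used here only as a placeholder."""
    return float(np.mean((held_out - theta) ** 2))

# Toy data: one observation per time step, matching the paper's statistical regime.
T, p = 50, 20
train = np.random.default_rng(1).standard_normal((T, p))
valid = np.random.default_rng(2).standard_normal((T, p))

# Cross-validation over sparsity levels: a single pass over the precomputed path.
path = solution_path(train, num_sparsity_levels=p)
best_k = min(range(len(path)), key=lambda k: validation_loss(path[k], valid))
print("selected sparsity level:", best_k)
```

The point of the sketch is only the selection loop: once the path is available, trying every sparsity level costs one evaluation per level.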
Related papers
- Fast Screening Rules for Optimal Design via Quadratic Lasso
Reformulation [0.135975510645475]
In this work, we derive safe screening rules that can be used to discard inessential samples.
The new tests are much faster to compute, especially for problems involving a parameter space of high dimension.
We show how an existing homotopy algorithm to compute the regularization path of the lasso method can be reparametrized with respect to the squared $\ell_1$-penalty.
arXiv Detail & Related papers (2023-10-13T08:10:46Z)
- Variance-Dependent Regret Bounds for Linear Bandits and Reinforcement
Learning: Adaptivity and Computational Efficiency [90.40062452292091]
We present the first computationally efficient algorithm for linear bandits with heteroscedastic noise.
Our algorithm is adaptive to the unknown variance of noise and achieves an $\tilde{O}(d\sqrt{\sum_{k=1}^{K}\sigma_k^2} + d)$ regret.
We also propose a variance-adaptive algorithm for linear mixture Markov decision processes (MDPs) in reinforcement learning.
arXiv Detail & Related papers (2023-02-21T00:17:24Z)
- STORM+: Fully Adaptive SGD with Momentum for Nonconvex Optimization [74.1615979057429]
We investigate stochastic nonconvex optimization problems where the objective is an expectation over smooth loss functions.
Our work builds on the STORM algorithm, in conjunction with a novel approach to adaptively set the learning rate and momentum parameters.
arXiv Detail & Related papers (2021-11-01T15:43:36Z)
- Robust Multi-view Registration of Point Sets with Laplacian Mixture
Model [25.865100974015412]
We propose a novel probabilistic generative method to align multiple point sets based on the heavy-tailed Laplacian distribution.
We demonstrate the advantages of our method by comparing it with representative state-of-the-art approaches on benchmark challenging data sets.
arXiv Detail & Related papers (2021-10-26T14:49:09Z)
- Distributionally Robust Optimization with Markovian Data [8.126833795693699]
We study a stochastic program where the probability distribution of the uncertain problem parameters is unknown.
We propose a data-driven distributionally robust optimization model to estimate the problem's objective function and optimal solution.
arXiv Detail & Related papers (2021-06-12T10:59:02Z)
- Scalable Inference of Sparsely-changing Markov Random Fields with Strong
Statistical Guarantees [10.127456032874978]
We introduce a new class of constrained optimization problems for the inference of sparsely-changing MRFs.
Our method is extremely efficient in practice: it can accurately estimate sparsely-changing graphical models with more than 500 million variables in less than one hour.
arXiv Detail & Related papers (2021-02-06T13:53:00Z)
- The Variational Method of Moments [65.91730154730905]
The conditional moment problem is a powerful formulation for describing structural causal parameters in terms of observables.
Motivated by a variational minimax reformulation of OWGMM, we define a very general class of estimators for the conditional moment problem.
We provide algorithms for valid statistical inference based on the same kind of variational reformulations.
arXiv Detail & Related papers (2020-12-17T07:21:06Z)
- Efficient semidefinite-programming-based inference for binary and
multi-class MRFs [83.09715052229782]
We propose an efficient method for computing the partition function or MAP estimate in a pairwise MRF.
We extend semidefinite relaxations from the typical binary MRF to the full multi-class setting, and develop a compact semidefinite relaxation that can again be solved efficiently with the same fast semidefinite solver.
arXiv Detail & Related papers (2020-12-04T15:36:29Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision
Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- Least Squares Regression with Markovian Data: Fundamental Limits and
Algorithms [69.45237691598774]
We study the problem of least squares linear regression where the data-points are dependent and are sampled from a Markov chain.
We establish sharp information-theoretic minimax lower bounds for this problem in terms of the mixing time $\tau_{\mathsf{mix}}$.
We propose an algorithm based on experience replay, a popular reinforcement learning technique, that achieves a significantly better error rate.
arXiv Detail & Related papers (2020-06-16T04:26:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.