Learning Gaussian Graphical Models via Multiplicative Weights
- URL: http://arxiv.org/abs/2002.08663v2
- Date: Tue, 25 Feb 2020 03:07:45 GMT
- Title: Learning Gaussian Graphical Models via Multiplicative Weights
- Authors: Anamay Chaturvedi and Jonathan Scarlett
- Abstract summary: We adapt an algorithm of Klivans and Meka based on the method of multiplicative weight updates.
The algorithm enjoys a sample complexity bound that is qualitatively similar to others in the literature.
It has a low runtime $O(mp^2)$ in the case of $m$ samples and $p$ nodes, and can trivially be implemented in an online manner.
- Score: 54.252053139374205
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graphical model selection in Markov random fields is a fundamental problem in
statistics and machine learning. Two particularly prominent models, the Ising
model and Gaussian model, have largely developed in parallel using different
(though often related) techniques, and several practical algorithms with
rigorous sample complexity bounds have been established for each. In this
paper, we adapt a recently proposed algorithm of Klivans and Meka (FOCS, 2017),
based on the method of multiplicative weight updates, from the Ising model to
the Gaussian model, via non-trivial modifications to both the algorithm and its
analysis. The algorithm enjoys a sample complexity bound that is qualitatively
similar to others in the literature, has a low runtime $O(mp^2)$ in the case of
$m$ samples and $p$ nodes, and can trivially be implemented in an online
manner.
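To make the approach concrete, the sketch below shows a Sparsitron-style multiplicative-weights regression of one node on the rest, in the spirit of the Klivans-Meka method with the clipping modification the Gaussian setting calls for. It is a minimal illustration under assumed hyperparameters (`lam`, `beta`, and `clip` are placeholders), not the paper's exact procedure:

```python
import numpy as np

def mw_neighborhood(X, i, lam=2.0, beta=0.9, clip=3.0):
    """Sparsitron-style multiplicative-weights regression of node i on the
    remaining nodes. `lam` is an assumed l1 bound on the true regression
    vector, and `clip` truncates the unbounded Gaussian samples; both are
    illustrative placeholders rather than the paper's prescribed values."""
    m = X.shape[0]
    y = np.clip(X[:, i], -clip, clip) / clip                # responses scaled to [-1, 1]
    Z = np.clip(np.delete(X, i, axis=1), -clip, clip) / clip
    d = Z.shape[1]
    F = np.hstack([Z, -Z])          # one expert per signed coordinate
    w = np.ones(2 * d)              # multiplicative weights over experts
    q_avg = np.zeros(2 * d)
    for t in range(m):              # one online pass, O(p) work per sample
        q = w / w.sum()
        pred = lam * (q @ F[t])     # prediction lies in [-lam, lam]
        # per-expert losses normalized into [0, 1] for the Hedge update
        loss = (1.0 + (pred - y[t]) * F[t] / (1.0 + lam)) / 2.0
        w *= beta ** loss           # multiplicative weight update
        q_avg += q
    # fold the signed experts back into a single coefficient vector
    return lam * (q_avg[:d] - q_avg[d:]) / m
```

Calling this once per node costs $O(mp)$ time, so estimating all $p$ neighborhoods and thresholding the resulting coefficients to read off edges would take $O(mp^2)$ overall, matching the runtime quoted above; the single pass over samples is also why an online implementation is immediate.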
Related papers
- Computational-Statistical Gaps in Gaussian Single-Index Models [77.1473134227844]
Single-Index Models are high-dimensional regression problems with planted structure.
We show that computationally efficient algorithms, both within the Statistical Query (SQ) and the Low-Degree Polynomial (LDP) frameworks, necessarily require $\Omega(d^{k^\star/2})$ samples.
arXiv Detail & Related papers (2024-03-08T18:50:19Z)
- Fusion of Gaussian Processes Predictions with Monte Carlo Sampling [61.31380086717422]
In science and engineering, we often work with models designed for accurate prediction of variables of interest.
Since these models are approximations of reality, it is desirable to apply multiple models to the same data and integrate their outcomes.
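As a rough illustration of the Monte Carlo fusion idea, the snippet below pools draws from several hypothetical Gaussian-process predictive distributions according to model weights; the means, standard deviations, and weights are placeholder values, not quantities from the paper:

```python
import numpy as np

# Hedged sketch: fuse the predictive densities of several hypothetical GP
# models by Monte Carlo sampling from the weighted mixture. The per-model
# predictive means/stds and the model weights below are made-up numbers.
rng = np.random.default_rng(0)
mu = np.array([1.0, 1.3, 0.8])     # per-model predictive means (assumed)
sd = np.array([0.2, 0.4, 0.3])     # per-model predictive stds (assumed)
w = np.array([0.5, 0.3, 0.2])      # model weights summing to 1 (assumed)

n = 10_000
ks = rng.choice(len(mu), size=n, p=w)       # pick a model for each draw
samples = rng.normal(mu[ks], sd[ks])        # sample that model's predictive
print(samples.mean(), samples.std())        # summaries of the fused predictive
```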
arXiv Detail & Related papers (2024-03-03T04:21:21Z)
- Improving Sample Efficiency of Model-Free Algorithms for Zero-Sum Markov Games [66.2085181793014]
We show that a model-free stage-based Q-learning algorithm can enjoy the same optimality in the $H$ dependence as model-based algorithms.
Our algorithm features a key novel design that updates the reference value functions using the pair of optimistic and pessimistic value functions.
arXiv Detail & Related papers (2023-08-17T08:34:58Z)
- Large-scale Bayesian Structure Learning for Gaussian Graphical Models using Marginal Pseudo-likelihood [0.26249027950824516]
We introduce two novel Markov chain Monte Carlo (MCMC) search algorithms with a significantly lower computational cost than leading Bayesian approaches.
These algorithms can deliver reliable results in mere minutes on standard computers, even for large-scale problems with one thousand variables.
We also illustrate the practical utility of our methods on medium and large-scale applications from human and mice gene expression studies.
arXiv Detail & Related papers (2023-06-30T20:37:40Z)
- Scaling up the self-optimization model by means of on-the-fly computation of weights [0.8057006406834467]
This work introduces a novel implementation of the Self-Optimization (SO) model that scales as $\mathcal{O}(N^2)$ with respect to the number of nodes $N$.
Our on-the-fly computation paves the way for investigating substantially larger system sizes, allowing for more variety and complexity in future studies.
arXiv Detail & Related papers (2022-11-03T10:51:25Z)
- Optimal estimation of Gaussian DAG models [14.240183323622288]
We study the optimal sample complexity of learning a Gaussian directed acyclic graph (DAG) from observational data.
Our results also extend to more general identification assumptions as well as subgaussian errors.
arXiv Detail & Related papers (2022-01-25T18:56:56Z)
- Learning Linear Models Using Distributed Iterative Hessian Sketching [4.567810220723372]
We consider the problem of learning the Markov parameters of a linear system from observed data.
We show that a randomized and distributed Newton algorithm based on Hessian-sketching can produce $\epsilon$-optimal solutions.
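For intuition, here is a minimal single-machine sketch of an iterative Hessian-sketch Newton step on an ordinary least-squares problem, standing in for the Markov-parameter regression; the problem sizes, sketch dimension, and iteration count are illustrative assumptions:

```python
import numpy as np

# Hedged sketch: iterative Hessian sketching for least squares
# min_x ||A x - b||^2, standing in for the Markov-parameter regression.
rng = np.random.default_rng(1)
n, d, s = 2000, 20, 200
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

x = np.zeros(d)
for _ in range(10):
    S = rng.normal(size=(s, n)) / np.sqrt(s)   # fresh Gaussian sketch
    SA = S @ A                                 # sketched data matrix
    H = SA.T @ SA                              # cheap surrogate for A^T A
    g = A.T @ (A @ x - b)                      # exact gradient
    x -= np.linalg.solve(H, g)                 # sketched Newton step

print(np.linalg.norm(A.T @ (A @ x - b)))       # ~0 at the optimum
```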
arXiv Detail & Related papers (2021-12-08T04:07:23Z)
- Near-Optimal Reward-Free Exploration for Linear Mixture MDPs with Plug-in Solver [32.212146650873194]
We provide approaches to learn an RL model efficiently without the guidance of a reward signal.
In particular, we take a plug-in solver approach, where we focus on learning a model in the exploration phase.
We show that, by establishing a novel exploration algorithm, the plug-in approach learns a model by taking $\tilde{O}(d^2H^3/\epsilon^2)$ interactions with the environment.
arXiv Detail & Related papers (2021-10-07T07:59:50Z)
- Finding Geometric Models by Clustering in the Consensus Space [61.65661010039768]
We propose a new algorithm for finding an unknown number of geometric models, e.g., homographies.
We present a number of applications where the use of multiple geometric models improves accuracy.
These include pose estimation from multiple generalized homographies and trajectory estimation of fast-moving objects.
arXiv Detail & Related papers (2021-03-25T14:35:07Z)
- Stochastic Flows and Geometric Optimization on the Orthogonal Group [52.50121190744979]
We present a new class of geometrically-driven optimization algorithms on the orthogonal group $O(d)$.
We show that our methods can be applied in various fields of machine learning including deep, convolutional and recurrent neural networks, reinforcement learning, flows and metric learning.
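For a flavor of optimization constrained to $O(d)$, the sketch below performs one gradient step via the Cayley transform, a standard retraction that keeps iterates exactly orthogonal; the paper's stochastic-flow schemes differ, and the gradient here is a dummy placeholder:

```python
import numpy as np

# Hedged sketch: one Riemannian gradient step on O(d) via the Cayley
# transform, which keeps the iterate exactly orthogonal. The gradient G is
# a dummy placeholder, and eta is an assumed step size.
rng = np.random.default_rng(2)
d, eta = 5, 0.1
W = np.linalg.qr(rng.normal(size=(d, d)))[0]   # random starting point on O(d)
G = rng.normal(size=(d, d))                    # placeholder Euclidean gradient

A = G @ W.T - W @ G.T                          # skew-symmetric update direction
I = np.eye(d)
# Cayley step: W <- (I + eta/2 A)^(-1) (I - eta/2 A) W stays on O(d)
W = np.linalg.solve(I + 0.5 * eta * A, (I - 0.5 * eta * A) @ W)
print(np.allclose(W @ W.T, I))                 # True: still orthogonal
```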
arXiv Detail & Related papers (2020-03-30T15:37:50Z)