Learning Large-Scale MTP$_2$ Gaussian Graphical Models via Bridge-Block
Decomposition
- URL: http://arxiv.org/abs/2309.13405v3
- Date: Fri, 29 Sep 2023 12:50:55 GMT
- Title: Learning Large-Scale MTP$_2$ Gaussian Graphical Models via Bridge-Block
Decomposition
- Authors: Xiwen Wang, Jiaxi Ying, Daniel P. Palomar
- Abstract summary: We show that the entire problem can be equivalently optimized through several smaller-scaled sub-problems.
From a practical standpoint, this simple and provable principle can be applied to break a large problem down into small, tractable ones.
- Score: 15.322124183968635
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper studies the problem of learning the large-scale Gaussian graphical
models that are multivariate totally positive of order two ($\text{MTP}_2$). By
introducing the concept of bridge, which commonly exists in large-scale sparse
graphs, we show that the entire problem can be equivalently optimized through
(1) several smaller-scaled sub-problems induced by a \emph{bridge-block
decomposition} on the thresholded sample covariance graph and (2) a set of
explicit solutions on entries corresponding to bridges. From a practical
standpoint, this simple and provable principle can be applied to break a
large problem down into small, tractable ones, leading to an enormous
reduction in computational complexity and substantial improvements for all
existing algorithms. Experiments on synthetic and real-world data
demonstrate that the proposed method achieves a significant speed-up over
state-of-the-art benchmarks.
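The decomposition described in the abstract can be illustrated with a short sketch. This is not the authors' implementation: the function name `bridge_block_decompose`, the threshold parameter `tau`, and the use of `networkx` are all assumptions made for illustration. It thresholds the sample covariance to build a graph, finds the bridges, and returns the bridge-blocks (connected components left after removing the bridges):

```python
import numpy as np
import networkx as nx  # assumed available; used only to illustrate the idea

def bridge_block_decompose(S, tau):
    """Illustrative sketch (not the paper's algorithm): build the
    thresholded sample covariance graph, locate its bridges, and return
    the bridge-blocks, i.e. the connected components that remain after
    the bridges are removed. `tau` is a hypothetical threshold."""
    p = S.shape[0]
    G = nx.Graph()
    G.add_nodes_from(range(p))
    for i in range(p):
        for j in range(i + 1, p):
            if abs(S[i, j]) > tau:  # keep edges with large sample covariance
                G.add_edge(i, j)
    bridges = list(nx.bridges(G))  # edges whose removal disconnects the graph
    H = G.copy()
    H.remove_edges_from(bridges)   # cutting bridges splits G into blocks
    blocks = [sorted(c) for c in nx.connected_components(H)]
    return blocks, bridges
```

Per the abstract, each block would then induce a smaller $\text{MTP}_2$ sub-problem to be solved independently, while precision-matrix entries corresponding to bridges admit explicit solutions.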
Related papers
- An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the need to explicitly compute these covariance matrices.
arXiv Detail & Related papers (2023-09-30T15:57:14Z)
- Efficient Graph Laplacian Estimation by Proximal Newton [12.05527862797306]
A graph learning problem can be formulated as a maximum likelihood estimation (MLE) of the precision matrix.
We develop a second-order approach to obtain an efficient solver utilizing several algorithmic features.
arXiv Detail & Related papers (2023-02-13T15:13:22Z)
- Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z)
- Learning Linear Models Using Distributed Iterative Hessian Sketching [4.567810220723372]
We consider the problem of learning the Markov parameters of a linear system from observed data.
We show that a randomized and distributed Newton algorithm based on Hessian-sketching can produce $\epsilon$-optimal solutions.
arXiv Detail & Related papers (2021-12-08T04:07:23Z)
- Gaussian Graphical Model Selection for Huge Data via Minipatch Learning [1.2891210250935146]
We propose the Minipatch Graph (MPGraph) estimator to solve the problem of graphical model selection.
MPGraph is a generalization of thresholded graph estimators fit to tiny, random subsets of both the observations and the nodes.
We prove that our algorithm achieves finite-sample graph selection consistency.
arXiv Detail & Related papers (2021-10-22T21:06:48Z)
- A Bi-Level Framework for Learning to Solve Combinatorial Optimization on Graphs [91.07247251502564]
We propose a hybrid approach to combine the best of the two worlds, in which a bi-level framework is developed with an upper-level learning method to optimize the graph.
Such a bi-level approach simplifies the learning on the original hard CO and can effectively mitigate the demand for model capacity.
arXiv Detail & Related papers (2021-06-09T09:18:18Z)
- Analysis of Truncated Orthogonal Iteration for Sparse Eigenvector Problems [78.95866278697777]
We propose two variants of the Truncated Orthogonal Iteration to compute multiple leading eigenvectors with sparsity constraints simultaneously.
We then apply our algorithms to solve the sparse principal component analysis problem on a wide range of test datasets.
arXiv Detail & Related papers (2021-03-24T23:11:32Z)
- PCA Reduced Gaussian Mixture Models with Applications in Superresolution [1.885698488789676]
This paper provides a twofold contribution to the topic.
First, we propose a Gaussian Mixture Model in conjunction with a reduction of the dimensionality of the data in each component of the model.
Second, we apply our PCA-GMM for the superresolution of 2D and 3D material images based on the approach of Sandeep and Jacob.
arXiv Detail & Related papers (2020-09-16T07:33:56Z)
- Boosting Data Reduction for the Maximum Weight Independent Set Problem Using Increasing Transformations [59.84561168501493]
We introduce new generalized data reduction and transformation rules for the maximum weight independent set problem.
Surprisingly, these so-called increasing transformations can simplify the problem and also open up the reduction space to yield even smaller irreducible graphs later in the algorithm.
Our algorithm computes significantly smaller irreducible graphs on all except one instance, solves more instances to optimality than previously possible, is up to two orders of magnitude faster than the best state-of-the-art solver, and finds higher-quality solutions than solvers DynWVC and HILS.
arXiv Detail & Related papers (2020-08-12T08:52:50Z)
- Online Dense Subgraph Discovery via Blurred-Graph Feedback [87.9850024070244]
We introduce a novel learning problem for dense subgraph discovery.
We first propose an edge-time algorithm that obtains a nearly-optimal solution with high probability.
We then design a more scalable algorithm with a theoretical guarantee.
arXiv Detail & Related papers (2020-06-24T11:37:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.