Does the $\ell_1$-norm Learn a Sparse Graph under Laplacian Constrained
Graphical Models?
- URL: http://arxiv.org/abs/2006.14925v2
- Date: Tue, 5 Sep 2023 17:14:54 GMT
- Title: Does the $\ell_1$-norm Learn a Sparse Graph under Laplacian Constrained
Graphical Models?
- Authors: Jiaxi Ying, José Vinícius de M. Cardoso, Daniel P. Palomar
- Abstract summary: We consider the problem of learning a graph under the Laplacian constrained Gaussian graphical models.
We show that a large regularization parameter will surprisingly lead to a complete graph, i.e., every pair of vertices is connected by an edge.
- Score: 13.572602792770288
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of learning a sparse graph under the Laplacian
constrained Gaussian graphical models. This problem can be formulated as a
penalized maximum likelihood estimation of the Laplacian constrained precision
matrix. Like in the classical graphical lasso problem, recent works made use of
the $\ell_1$-norm regularization with the goal of promoting sparsity in
Laplacian constrained precision matrix estimation. However, we find that the
widely used $\ell_1$-norm is not effective in imposing a sparse solution in
this problem. Through empirical evidence, we observe that the number of nonzero
graph weights grows with the increase of the regularization parameter. From a
theoretical perspective, we prove that a large regularization parameter will
surprisingly lead to a complete graph, i.e., every pair of vertices is
connected by an edge. To address this issue, we introduce the nonconvex
sparsity penalty, and propose a new estimator by solving a sequence of weighted
$\ell_1$-norm penalized sub-problems. We establish the non-asymptotic
optimization performance guarantees on both optimization error and statistical
error, and prove that the proposed estimator can recover the edges correctly
with a high probability. To solve each sub-problem, we develop a projected
gradient descent algorithm which enjoys a linear convergence rate. Finally, an
extension to learn disconnected graphs is proposed by imposing an additional
rank constraint. We propose a numerical algorithm based on the alternating
direction method of multipliers, and establish its theoretical sequence
convergence. Numerical experiments involving synthetic and real-world data sets
demonstrate the effectiveness of the proposed method.
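For concreteness, the estimation problem above can be sketched as follows; the notation is our reconstruction from the abstract and may differ in detail from the paper. With $S$ the sample covariance and $\mathcal{L}$ the set of graph Laplacians, the $\ell_1$-penalized maximum likelihood estimator reads

$$\min_{L \in \mathcal{L}} \; -\log \operatorname{gdet}(L) + \operatorname{tr}(LS) + \lambda \|L\|_1, \qquad \mathcal{L} = \{L \succeq 0 : L\mathbf{1} = \mathbf{0},\ L_{ij} \le 0 \text{ for } i \neq j\},$$

where $\operatorname{gdet}$ denotes the pseudo-determinant (the product of the nonzero eigenvalues). The failure of the $\ell_1$ penalty then admits a one-line heuristic: for any $L \in \mathcal{L}$, each diagonal entry equals the absolute sum of the off-diagonal entries in its row, so

$$\|L\|_1 = \sum_i L_{ii} + \sum_{i \neq j} |L_{ij}| = 2 \operatorname{tr}(L),$$

i.e., the penalty is linear in $L$ and merely replaces $S$ by $S + 2\lambda I$; inflating the diagonal of $S$ in this way does not zero out edges, consistent with the complete-graph result stated above. Under this reading, the proposed estimator swaps $\lambda \|L\|_1$ for a nonconvex penalty $\sum_{i>j} \rho_\lambda(|L_{ij}|)$ and solves a sequence of weighted sub-problems of the form

$$L^{(k+1)} \in \arg\min_{L \in \mathcal{L}} \; -\log \operatorname{gdet}(L) + \operatorname{tr}(LS) + \sum_{i>j} \rho'_\lambda\big(|L^{(k)}_{ij}|\big)\,|L_{ij}|,$$

the standard majorize-minimize reweighting; whether the paper uses exactly this update rule is our assumption. For the disconnected-graph extension, note that a graph with $k$ connected components has a Laplacian with exactly $k$ zero eigenvalues, so the additional constraint would take the form $\operatorname{rank}(L) = p - k$. A sketch of the projected gradient solver appears after the related papers below.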
Related papers
- Network Topology Inference with Sparsity and Laplacian Constraints [18.447094648361453]
We tackle network topology inference by utilizing Laplacian constrained Gaussian graphical models.
An efficient projection algorithm is developed to solve the resulting problem.
arXiv Detail & Related papers (2023-09-02T15:06:30Z)
- Efficient Graph Laplacian Estimation by Proximal Newton [12.05527862797306]
A graph learning problem can be formulated as a maximum likelihood estimation (MLE) of the precision matrix.
We develop a second-order approach to obtain an efficient solver utilizing several algorithmic features.
arXiv Detail & Related papers (2023-02-13T15:13:22Z)
- Stochastic Inexact Augmented Lagrangian Method for Nonconvex Expectation Constrained Optimization [88.0031283949404]
Many real-world problems have complicated nonconvex functional constraints and use a large number of data points.
Our proposed method outperforms an existing method with the previously best-known result.
arXiv Detail & Related papers (2022-12-19T14:48:54Z)
- Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z)
- Graph Polynomial Convolution Models for Node Classification of Non-Homophilous Graphs [52.52570805621925]
We investigate efficient learning from higher-order graph convolution and learning directly from the adjacency matrix for node classification.
We show that the resulting model leads to new graphs and a residual scaling parameter.
We demonstrate that the proposed methods obtain improved accuracy for node classification of non-homophilous graphs.
arXiv Detail & Related papers (2022-09-12T04:46:55Z)
- Statistical Inference of Constrained Stochastic Optimization via Sketched Sequential Quadratic Programming [53.63469275932989]
We consider online statistical inference of constrained nonlinear optimization problems.
We apply a Stochastic Sequential Quadratic Programming (StoSQP) method to solve these problems.
arXiv Detail & Related papers (2022-05-27T00:34:03Z)
- Sparse Graph Learning Under Laplacian-Related Constraints [23.503820266772504]
We focus on graph Laplacian-related constraints on the sparse precision matrix that encodes conditional dependence between random variables.
We investigate modifications to widely used penalized log-likelihood approaches to enforce total positivity but not the Laplacian structure.
An alternating direction method of multipliers (ADMM) algorithm is presented and analyzed for constrained optimization.
arXiv Detail & Related papers (2021-11-16T00:50:36Z)
- Learning Sparse Graph with Minimax Concave Penalty under Gaussian Markov Random Fields [51.07460861448716]
This paper presents a convex-analytic framework to learn sparse graphs from data.
We show that convexity is guaranteed by expressing the graph Laplacian through a linear transform of the vector corresponding to its upper triangular part.
arXiv Detail & Related papers (2021-09-17T17:46:12Z)
- Bayesian Inference of Random Dot Product Graphs via Conic Programming [1.7188280334580195]
We present a convex cone program to infer the latent probability matrix of a random dot product graph (RDPG).
We also demonstrate that the method recovers latent structure in the Karate Club Graph and synthetic U.S. Senate vote graphs.
arXiv Detail & Related papers (2021-01-06T18:29:37Z)
- Private Stochastic Non-Convex Optimization: Adaptive Algorithms and Tighter Generalization Bounds [72.63031036770425]
We propose differentially private (DP) algorithms for stochastic non-convex optimization.
We demonstrate, on two popular deep learning tasks, the empirical advantages over standard gradient methods.
arXiv Detail & Related papers (2020-06-24T06:01:24Z)
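Returning to the headline paper: below is a minimal Python sketch of the projected gradient solver for one weighted $\ell_1$ sub-problem, parameterizing the Laplacian by nonnegative edge weights and projecting each gradient step onto the nonnegative orthant. All names, the fixed step size, and the iteration budget are our assumptions rather than the paper's implementation, and the rank-one correction assumes the iterates stay connected.

    import numpy as np

    def laplacian(w, p, edges):
        """Assemble a graph Laplacian from nonnegative edge weights."""
        L = np.zeros((p, p))
        for e, (i, j) in enumerate(edges):
            L[i, i] += w[e]
            L[j, j] += w[e]
            L[i, j] -= w[e]
            L[j, i] -= w[e]
        return L

    def pgd_weighted_subproblem(S, h, eta=1e-3, iters=1000):
        """Projected gradient descent for
        min_{w >= 0}  -log gdet(L(w)) + tr(L(w) S) + h . w,
        where h holds the reweighted l1 coefficients (|L_ij| = w_e)."""
        p = S.shape[0]
        edges = [(i, j) for i in range(p) for j in range(i + 1, p)]
        w = np.ones(len(edges))        # complete graph: a connected start
        J = np.ones((p, p)) / p        # det(L + J) = gdet(L) when L1 = 0 and rank(L) = p - 1
        for _ in range(iters):
            L = laplacian(w, p, edges)
            P = np.linalg.inv(L + J) - J   # equals pinv(L) for a connected graph
            grad = np.array([
                (S[i, i] + S[j, j] - 2.0 * S[i, j])    # d tr(LS) / d w_e
                - (P[i, i] + P[j, j] - 2.0 * P[i, j])  # d (-log gdet L) / d w_e
                + h[e]                                 # penalty weight for this edge
                for e, (i, j) in enumerate(edges)
            ])
            w = np.maximum(w - eta * grad, 0.0)  # gradient step, then project onto w >= 0
        return laplacian(w, p, edges)

Here h would hold the coefficients $\rho'_\lambda(w_e^{(k)})$ from the previous outer iteration; in practice a backtracking line search would replace the fixed step size.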