Learning Sparse Graph with Minimax Concave Penalty under Gaussian Markov
Random Fields
- URL: http://arxiv.org/abs/2109.08666v1
- Date: Fri, 17 Sep 2021 17:46:12 GMT
- Title: Learning Sparse Graph with Minimax Concave Penalty under Gaussian Markov
Random Fields
- Authors: Tatsuya Koyakumaru, Masahiro Yukawa, Eduardo Pavez, and Antonio Ortega
- Abstract summary: This paper presents a convex-analytic framework to learn sparse graphs from data.
We show that overall convexity is guaranteed via Moreau's decomposition, with the graph Laplacian replaced in the optimization by a linear transform of the vector corresponding to its upper triangular part.
- Score: 51.07460861448716
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a convex-analytic framework to learn sparse graphs from
data. While our problem formulation is inspired by an extension of the
graphical lasso using the so-called combinatorial graph Laplacian framework, a
key difference is the use of a nonconvex alternative to the $\ell_1$ norm to
attain graphs with better interpretability. Specifically, we use the
weakly-convex minimax concave penalty (the difference between the $\ell_1$ norm
and the Huber function) which is known to yield sparse solutions with lower
estimation bias than $\ell_1$ for regression problems. In our framework, the
graph Laplacian is replaced in the optimization by a linear transform of the
vector corresponding to its upper triangular part. Via a reformulation relying
on Moreau's decomposition, we show that overall convexity is guaranteed by
introducing a quadratic function to our cost function. The problem can be
solved efficiently by the primal-dual splitting method, for which admissible
conditions ensuring provable convergence are presented. Numerical examples show that
the proposed method significantly outperforms the existing graph learning
methods with reasonable CPU time.
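To make the penalty concrete, here is a minimal sketch of the minimax concave penalty written as the difference between the $\ell_1$ norm and the Huber function, as the abstract describes; the parameter `gamma` and the elementwise form are illustrative assumptions rather than the paper's exact parameterization.

```python
import numpy as np

def huber(x, gamma=1.0):
    """Elementwise Huber function: quadratic near zero, linear in the tails."""
    ax = np.abs(x)
    return np.where(ax <= gamma, ax**2 / (2.0 * gamma), ax - gamma / 2.0)

def mcp(x, gamma=1.0):
    """Minimax concave penalty as l1 minus Huber (illustrative parameterization).

    For |x| <= gamma this equals |x| - x**2 / (2 * gamma); beyond gamma it
    saturates at the constant gamma / 2, which is why MCP incurs less
    estimation bias than the l1 norm on large coefficients.
    """
    return np.abs(x) - huber(x, gamma)

x = np.linspace(-3.0, 3.0, 7)
print(mcp(x))  # constant outside [-1, 1], unlike the ever-growing l1 norm
```

Because this penalty is only weakly convex, adding a quadratic term of sufficient curvature to the cost restores overall convexity, which is the role of the quadratic function mentioned in the abstract.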
Related papers
- CLAP: Concave Linear APproximation for Quadratic Graph Matching [5.417323487240968]
We introduce a linear model and design a novel approximation matrix for graph matching.
We then transform the original QAP into a linear model that is concave for the structural constraint.
This model can be solved using the Sinkhorn optimal transport algorithm.
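Since this entry's model is solved with the Sinkhorn algorithm, below is a minimal sketch of Sinkhorn iterations for entropy-regularized optimal transport; the cost matrix, regularization strength `eps`, and iteration count are illustrative assumptions, not taken from the CLAP paper.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iters=200):
    """Entropy-regularized optimal transport via Sinkhorn scaling.

    a, b : marginal distributions (nonnegative, summing to 1)
    C    : cost matrix of shape (len(a), len(b))
    Returns the transport plan P with row sums a and column sums b.
    """
    K = np.exp(-C / eps)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):      # alternately match the two marginals
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

# Toy usage: a 3x3 matching problem with uniform marginals.
C = np.random.rand(3, 3)
P = sinkhorn(np.ones(3) / 3, np.ones(3) / 3, C)
print(P.sum(axis=0), P.sum(axis=1))  # both approximately uniform
```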
arXiv Detail & Related papers (2024-10-22T15:28:18Z) - Methods for Convex $(L_0,L_1)$-Smooth Optimization: Clipping, Acceleration, and Adaptivity [50.25258834153574]
We focus on the class of (strongly) convex $(L_0,L_1)$-smooth functions and derive new convergence guarantees for several existing methods.
In particular, we derive improved convergence rates for Gradient Descent with Smoothed Gradient Clipping and for Gradient Descent with Polyak Stepsizes.
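As a reference point for the clipping methods analyzed in this entry, here is a minimal sketch of one gradient descent step with plain norm clipping (the smoothed variant studied in the paper applies the threshold differently); the step size and clipping level are illustrative assumptions.

```python
import numpy as np

def clipped_gd_step(x, grad_fn, lr=0.1, clip=1.0):
    """One gradient descent step with norm clipping.

    The gradient is rescaled so its norm never exceeds `clip`, the basic
    mechanism behind clipping methods for (L0, L1)-smooth objectives.
    """
    g = grad_fn(x)
    norm = np.linalg.norm(g)
    if norm > clip:
        g = g * (clip / norm)
    return x - lr * g

# Toy usage: minimize f(x) = ||x||^2 / 2, whose gradient is x itself.
x = np.array([10.0, -10.0])
for _ in range(100):
    x = clipped_gd_step(x, lambda x: x)
print(x)  # close to the minimizer at the origin
```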
arXiv Detail & Related papers (2024-09-23T13:11:37Z) - Universal Online Learning with Gradient Variations: A Multi-layer Online Ensemble Approach [57.92727189589498]
We propose an online convex optimization approach with two different levels of adaptivity.
We obtain $\mathcal{O}(\log V_T)$, $\mathcal{O}(d \log V_T)$ and $\hat{\mathcal{O}}(\sqrt{V_T})$ regret bounds for strongly convex, exp-concave and convex loss functions.
arXiv Detail & Related papers (2023-07-17T09:55:35Z) - Efficient Graph Laplacian Estimation by Proximal Newton [12.05527862797306]
A graph learning problem can be formulated as a maximum likelihood estimation (MLE) of the precision matrix.
We develop a second-order approach to obtain an efficient solver utilizing several algorithmic features.
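For context on what the proximal Newton solver in this entry optimizes, here is a minimal sketch of evaluating the Laplacian-constrained MLE objective; handling the singular Laplacian via a generalized determinant with an eigenvalue cutoff is an illustrative assumption, and the paper's contribution is the second-order solver, not this evaluation.

```python
import numpy as np

def laplacian_mle_objective(L, S, tol=1e-10):
    """Negative log-likelihood tr(L S) - log gdet(L) for a precision matrix
    constrained to be a combinatorial graph Laplacian.

    A connected graph's Laplacian has exactly one zero eigenvalue, so the
    generalized determinant (product of nonzero eigenvalues) replaces det.
    """
    eigvals = np.linalg.eigvalsh(L)
    pos = eigvals[eigvals > tol]            # drop the zero eigenvalue
    return np.trace(L @ S) - np.sum(np.log(pos))

# Toy usage: the path graph on 3 nodes and an identity "covariance".
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
print(laplacian_mle_objective(L, np.eye(3)))
```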
arXiv Detail & Related papers (2023-02-13T15:13:22Z) - Stochastic Inexact Augmented Lagrangian Method for Nonconvex Expectation
Constrained Optimization [88.0031283949404]
Many real-world problems have complicated nonconvex functional constraints and use a large number of data points.
Our proposed method outperforms an existing method with the previously best-known result.
arXiv Detail & Related papers (2022-12-19T14:48:54Z) - Statistical Inference of Constrained Stochastic Optimization via Sketched Sequential Quadratic Programming [53.63469275932989]
We consider online statistical inference of constrained stochastic nonlinear optimization problems.
We apply a Stochastic Sequential Quadratic Programming (StoSQP) method to solve these problems.
arXiv Detail & Related papers (2022-05-27T00:34:03Z) - Multilayer Graph Clustering with Optimized Node Embedding [70.1053472751897]
Multilayer graph clustering aims at dividing the graph nodes into categories or communities.
We propose a clustering-friendly embedding of the layers of a given multilayer graph.
Experiments show that our method leads to a significant improvement.
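The entry above proposes an optimized node embedding; as a generic baseline illustrating the task (not the paper's method), here is a minimal sketch that merges the layers, embeds nodes with Laplacian eigenvectors, and clusters with k-means. The averaging rule and cluster count are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def multilayer_spectral_clustering(adjacencies, k):
    """Baseline multilayer clustering: average the layers, embed, cluster."""
    A = sum(adjacencies) / len(adjacencies)      # merge layers into one graph
    L = np.diag(A.sum(axis=1)) - A               # combinatorial Laplacian
    _, vecs = np.linalg.eigh(L)
    emb = vecs[:, 1:k + 1]                       # skip the constant eigenvector
    return KMeans(n_clusters=k, n_init=10).fit_predict(emb)

# Toy usage: three random symmetric layers on 12 nodes, two communities.
rng = np.random.default_rng(0)
layers = [(rng.random((12, 12)) < 0.3).astype(float) for _ in range(3)]
layers = [np.triu(A, 1) + np.triu(A, 1).T for A in layers]  # no self-loops
print(multilayer_spectral_clustering(layers, k=2))
```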
arXiv Detail & Related papers (2021-03-30T17:36:40Z) - Bayesian Inference of Random Dot Product Graphs via Conic Programming [1.7188280334580195]
We present a convex cone program to infer the latent probability matrix of a random dot product graph (RDPG).
We also demonstrate that the method recovers latent structure in the Karate Club Graph and synthetic U.S. Senate vote graphs.
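For readers unfamiliar with the model inferred by the conic program above, here is a minimal sketch of sampling a random dot product graph, where each edge appears with probability given by the dot product of latent positions; the dimensions and uniform latent positions are illustrative assumptions.

```python
import numpy as np

def sample_rdpg(X, rng=None):
    """Sample an undirected random dot product graph (RDPG).

    X : (n, d) latent positions; edge (i, j) appears independently with
    probability X[i] @ X[j], so P = X @ X.T is the latent probability
    matrix that the conic program aims to infer.
    """
    rng = np.random.default_rng(rng)
    P = np.clip(X @ X.T, 0.0, 1.0)           # latent probability matrix
    A = (rng.random(P.shape) < P).astype(int)
    A = np.triu(A, k=1)                      # upper triangle, no self-loops
    return A + A.T                           # symmetrize

# Toy usage: 50 nodes with 2-dimensional latent positions.
X = np.random.default_rng(0).uniform(0.0, 0.7, size=(50, 2))
A = sample_rdpg(X, rng=1)
print(A.sum() // 2, "edges")
```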
arXiv Detail & Related papers (2021-01-06T18:29:37Z) - Multiway $p$-spectral graph cuts on Grassmann manifolds [0.3441021278275805]
We present a novel direct multiway spectral clustering algorithm in the $p$-norm, for $p \in (1, 2]$.
The problem of computing multiple eigenvectors of the graph $p$-Laplacian is recast as an unconstrained minimization problem on a Grassmann manifold.
Monitoring the monotonic decrease of the balanced graph cuts guarantees that we obtain the best available solution from the $p$-levels considered.
arXiv Detail & Related papers (2020-08-30T16:25:04Z) - Does the $\ell_1$-norm Learn a Sparse Graph under Laplacian Constrained
Graphical Models? [13.572602792770288]
We consider the problem of learning a graph under Laplacian-constrained Gaussian graphical models.
We show that a large regularization parameter will surprisingly lead to a complete graph, i.e., every pair of nodes is connected by an edge.
arXiv Detail & Related papers (2020-06-26T12:06:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.