Learning Graph Laplacian with MCP
- URL: http://arxiv.org/abs/2010.11559v2
- Date: Thu, 5 Oct 2023 08:56:20 GMT
- Title: Learning Graph Laplacian with MCP
- Authors: Yangjing Zhang, Kim-Chuan Toh, Defeng Sun
- Abstract summary: We consider the problem of learning a graph under the Laplacian constraint with a non-convex penalty.
We apply an efficient Newton method to subproblems of the DCA.
- Score: 8.409980020848167
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of learning a graph under the Laplacian constraint
with a non-convex penalty: minimax concave penalty (MCP). For solving the MCP
penalized graphical model, we design an inexact proximal difference-of-convex
algorithm (DCA) and prove its convergence to critical points. We note that each
subproblem of the proximal DCA enjoys the nice property that the objective
function in its dual problem is continuously differentiable with a semismooth
gradient. Therefore, we apply an efficient semismooth Newton method to
subproblems of the proximal DCA. Numerical experiments on various synthetic and
real data sets demonstrate the effectiveness of the non-convex penalty MCP in
promoting sparsity. Compared with the existing state-of-the-art method, our
method is demonstrated to be more efficient and reliable for learning graph
Laplacian with MCP.
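The minimax concave penalty (MCP) central to the abstract has a standard closed form: it behaves like the l1 penalty near zero but flattens to a constant beyond a threshold, which is what promotes sparsity without over-shrinking large entries. A minimal sketch of the penalty function, with illustrative parameter values `lam` and `gamma` (not the paper's settings):

```python
import numpy as np

def mcp(t, lam=1.0, gamma=2.0):
    """Minimax concave penalty (MCP), applied elementwise.

    p(t) = lam*|t| - t**2 / (2*gamma)   if |t| <= gamma*lam
         = gamma*lam**2 / 2             otherwise
    Requires gamma > 1 and lam > 0.
    """
    a = np.abs(np.asarray(t, dtype=float))
    inner = lam * a - a**2 / (2.0 * gamma)   # concave ramp near zero
    flat = gamma * lam**2 / 2.0              # constant beyond gamma*lam
    return np.where(a <= gamma * lam, inner, flat)
```

The flat region (constant value `gamma*lam**2/2` once `|t| > gamma*lam`) is what makes MCP non-convex; writing it as the l1 norm minus a convex remainder is the difference-of-convex split that a proximal DCA exploits.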
Related papers
- Learning Multi-Attribute Differential Graphs with Non-Convex Penalties [12.94486861344922]
We consider the problem of estimating the difference between two multi-attribute Gaussian graphical models (GGMs), which are known to share a similar D-trace loss function, under non-convex penalties.
arXiv Detail & Related papers (2025-05-14T19:19:09Z)
- Decentralized Nonconvex Composite Federated Learning with Gradient Tracking and Momentum [78.27945336558987]
Decentralized federated learning (DFL) eliminates reliance on a central server in the client-server architecture.
Non-smooth regularization is often incorporated into machine learning tasks.
We propose a novel DNCFL algorithm to solve these problems.
arXiv Detail & Related papers (2025-04-17T08:32:25Z)
- CLAP: Concave Linear APproximation for Quadratic Graph Matching [5.417323487240968]
We introduce a linear model and design a novel approximation matrix for graph matching.
We then transform the original quadratic assignment problem (QAP) into a linear model that is concave under the structural constraint.
This model can be solved using the Sinkhorn optimal transport algorithm.
arXiv Detail & Related papers (2024-10-22T15:28:18Z) - Efficient Graph Laplacian Estimation by Proximal Newton [12.05527862797306]
A graph learning problem can be formulated as a maximum likelihood estimation (MLE) of the precision matrix.
We develop a second-order approach to obtain an efficient solver utilizing several algorithmic features.
arXiv Detail & Related papers (2023-02-13T15:13:22Z) - Learning Sparse Graph with Minimax Concave Penalty under Gaussian Markov
Random Fields [51.07460861448716]
This paper presents a convex-analytic framework to learn from data.
We show that a triangular convexity decomposition is guaranteed by a transform of the corresponding to its upper part.
arXiv Detail & Related papers (2021-09-17T17:46:12Z) - Differentiable Annealed Importance Sampling and the Perils of Gradient
Noise [68.44523807580438]
Annealed importance sampling (AIS) and related algorithms are highly effective tools for marginal likelihood estimation.
Differentiability is a desirable property as it would admit the possibility of optimizing marginal likelihood as an objective.
We propose a differentiable algorithm by abandoning Metropolis-Hastings steps, which further unlocks mini-batch computation.
arXiv Detail & Related papers (2021-07-21T17:10:14Z) - Cogradient Descent for Dependable Learning [64.02052988844301]
We propose a dependable learning based on Cogradient Descent (CoGD) algorithm to address the bilinear optimization problem.
CoGD is introduced to solve bilinear problems when one variable is with sparsity constraint.
It can also be used to decompose the association of features and weights, which further generalizes our method to better train convolutional neural networks (CNNs)
arXiv Detail & Related papers (2021-06-20T04:28:20Z) - Learning Graphs from Smooth Signals under Moment Uncertainty [23.868075779606425]
We consider the problem of inferring the graph structure from a given set of graph signals.
Traditional graph learning models do not take this distributional uncertainty into account.
arXiv Detail & Related papers (2021-05-12T06:47:34Z) - Deep Graph Matching under Quadratic Constraint [30.72996618021077]
We propose to explicitly formulate pairwise graph structures as a textbfquadratic constraint incorporated into the deep graph matching framework.
The quadratic constraint minimizes the pairwise structural discrepancy between graphs, which can reduce the ambiguities brought by only using the extracted CNN features.
To give more precise and proper supervision, a well-designed false matching loss against class imbalance is proposed, which can better penalize the false negatives and false positives with less overfitting.
arXiv Detail & Related papers (2021-03-11T12:51:12Z) - Non-approximate Inference for Collective Graphical Models on Path Graphs
via Discrete Difference of Convex Algorithm [19.987509826212115]
This paper proposes a new method for MAP inference for Collective Graphical Model (CGM) on path graphs.
First we show that the MAP inference problem can be formulated as a (non-linear) minimum cost flow problem.
In our algorithm, important subroutines in DCA can be efficiently calculated by minimum convex cost flow algorithms.
arXiv Detail & Related papers (2021-02-18T07:28:18Z) - Federated Deep AUC Maximization for Heterogeneous Data with a Constant
Communication Complexity [77.78624443410216]
We propose improved FDAM algorithms for detecting heterogeneous chest data.
A result of this paper is that the communication of the proposed algorithm is strongly independent of the number of machines and also independent of the accuracy level.
Experiments have demonstrated the effectiveness of our FDAM algorithm on benchmark datasets and on medical chest Xray images from different organizations.
arXiv Detail & Related papers (2021-02-09T04:05:19Z) - Multilayer Clustered Graph Learning [66.94201299553336]
We use contrastive loss as a data fidelity term, in order to properly aggregate the observed layers into a representative graph.
Experiments show that our method leads to a clustered clusters w.r.t.
We learn a clustering algorithm for solving clustering problems.
arXiv Detail & Related papers (2020-10-29T09:58:02Z) - Cogradient Descent for Bilinear Optimization [124.45816011848096]
We introduce a Cogradient Descent algorithm (CoGD) to address the bilinear problem.
We solve one variable by considering its coupling relationship with the other, leading to a synchronous gradient descent.
Our algorithm is applied to solve problems with one variable under the sparsity constraint.
arXiv Detail & Related papers (2020-06-16T13:41:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.