Laplacian-based Semi-Supervised Learning in Multilayer Hypergraphs by
Coordinate Descent
- URL: http://arxiv.org/abs/2301.12184v2
- Date: Sun, 24 Sep 2023 11:47:14 GMT
- Title: Laplacian-based Semi-Supervised Learning in Multilayer Hypergraphs by
Coordinate Descent
- Authors: Sara Venturini, Andrea Cristofari, Francesco Rinaldi, Francesco
Tudisco
- Abstract summary: Graph Semi-Supervised learning is an important data analysis tool.
In this paper, we consider an optimization-based formulation of the problem for an undirected graph and extend it to multilayer hypergraphs.
We solve the problem using different coordinate descent approaches and compare the results with those obtained by the classic gradient descent method.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Semi-Supervised learning is an important data analysis tool, where
given a graph and a set of labeled nodes, the aim is to infer labels for the
remaining unlabeled nodes. In this paper, we start by considering an
optimization-based formulation of the problem for an undirected graph, and then
we extend this formulation to multilayer hypergraphs. We solve the problem
using different coordinate descent approaches and compare the results with the
ones obtained by the classic gradient descent method. Experiments on synthetic
and real-world datasets show the potential of using coordinate descent methods
with suitable selection rules.
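The optimization-based formulation described above can be illustrated with a minimal sketch. Assuming the standard quadratic Laplacian objective f(x) = xᵀLx with labeled entries held fixed (the classic harmonic-function setup; the paper's actual multilayer hypergraph formulation and selection rules are more general), a coordinate descent sweep updates each unlabeled node to its exact coordinate-wise minimizer, the weighted average of its neighbors:

```python
import numpy as np

def coordinate_descent_ssl(W, labels, labeled, n_sweeps=100):
    """Semi-supervised label inference by coordinate descent on the
    Laplacian objective f(x) = x^T L x, with labeled entries fixed.

    W       : symmetric nonnegative adjacency matrix (n x n)
    labels  : initial label vector (only labeled entries matter)
    labeled : boolean mask of labeled nodes
    """
    x = labels.astype(float).copy()
    unlabeled = np.where(~labeled)[0]
    for _ in range(n_sweeps):
        for i in unlabeled:
            d = W[i].sum()
            if d > 0:
                # Exact minimizer of f along coordinate i:
                # x_i = (sum_j W_ij x_j) / (sum_j W_ij)
                x[i] = W[i] @ x / d
    return x

# Tiny example: path graph 0-1-2-3 with endpoints labeled 0 and 1.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
labels = np.array([0.0, 0.0, 0.0, 1.0])
labeled = np.array([True, False, False, True])
x = coordinate_descent_ssl(W, labels, labeled)
# The harmonic solution interpolates linearly along the path,
# so the interior nodes converge toward 1/3 and 2/3.
```

Here each coordinate step solves its one-dimensional subproblem exactly, which is what makes coordinate descent with suitable selection rules competitive with full gradient descent on this class of objectives.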
Related papers
- Nonlinear Correct and Smooth for Semi-Supervised Learning [1.622641093702668]
Graph-based semi-supervised learning (GSSL) has been used successfully in various applications.
We propose Nonlinear Correct and Smooth (NLCS), which improves the existing post-processing approach by incorporating non-linearity and higher-order representation.
arXiv Detail & Related papers (2023-10-09T14:33:32Z) - Gradient scarcity with Bilevel Optimization for Graph Learning [0.0]
Gradient scarcity occurs when learning graphs by minimizing a loss on a subset of nodes causes edges between unlabelled nodes that are far from labelled ones to receive zero gradients.
We give a precise mathematical characterization of this phenomenon, and prove it also emerges in bilevel optimization.
To alleviate this issue, we study several solutions: we propose to resort to latent graph learning using a Graph-to-Graph model (G2G), graph regularization to impose a prior structure on the graph, or optimizing on a larger graph than the original one with a reduced diameter.
arXiv Detail & Related papers (2023-03-24T12:37:43Z) - Joint Graph and Vertex Importance Learning [47.249968772606145]
We propose a novel method to learn a graph with smaller edge weight upper bounds compared to Laplacian approaches.
Experimentally, our approach yields much sparser graphs compared to a Laplacian approach, with a more interpretable model.
arXiv Detail & Related papers (2023-03-15T12:12:13Z) - Deep Manifold Learning with Graph Mining [80.84145791017968]
We propose a novel graph deep model with a non-gradient decision layer for graph mining.
The proposed model has achieved state-of-the-art performance compared to the current models.
arXiv Detail & Related papers (2022-07-18T04:34:08Z) - Data-heterogeneity-aware Mixing for Decentralized Learning [63.83913592085953]
We characterize the dependence of convergence on the relationship between the mixing weights of the graph and the data heterogeneity across nodes.
We propose a metric that quantifies the ability of a graph to mix the current gradients.
Motivated by our analysis, we propose an approach that periodically and efficiently optimizes the metric.
arXiv Detail & Related papers (2022-04-13T15:54:35Z) - Graph Transplant: Node Saliency-Guided Graph Mixup with Local Structure
Preservation [27.215800308343322]
We present the first Mixup-like graph augmentation method at the graph-level called Graph Transplant.
Our method identifies the sub-structure as a mix unit that can preserve the local information.
We extensively validate our method with diverse GNN architectures on multiple graph classification benchmark datasets.
arXiv Detail & Related papers (2021-11-10T11:10:13Z) - Multilayer Graph Clustering with Optimized Node Embedding [70.1053472751897]
Multilayer graph clustering aims at dividing the graph nodes into categories or communities.
We propose a clustering-friendly embedding of the layers of a given multilayer graph.
Experiments show that our method leads to a significant improvement.
arXiv Detail & Related papers (2021-03-30T17:36:40Z) - Multilayer Clustered Graph Learning [66.94201299553336]
We use a contrastive loss as a data fidelity term in order to properly aggregate the observed layers into a representative graph.
Experiments show that our method leads to well-clustered representative graphs.
arXiv Detail & Related papers (2020-10-29T09:58:02Z) - Line Graph Neural Networks for Link Prediction [71.00689542259052]
We consider the graph link prediction task, which is a classic graph analytical problem with many real-world applications.
In this formalism, a link prediction problem is converted to a graph classification task.
We propose to seek a radically different and novel path by making use of the line graphs in graph theory.
In particular, each node in a line graph corresponds to a unique edge in the original graph. Therefore, link prediction problems in the original graph can be equivalently solved as a node classification problem in its corresponding line graph, instead of a graph classification task.
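The line-graph reduction described above is easy to make concrete. A minimal stdlib-only sketch (not the paper's implementation, which uses a GNN on the resulting graph): each edge of the original graph becomes a node of the line graph, and two such nodes are adjacent iff the corresponding edges share an endpoint.

```python
from itertools import combinations

def line_graph(edges):
    """Build the line graph of an undirected graph given as an edge list.

    Returns (nodes, adj): one line-graph node per original edge, and
    an edge between two line-graph nodes iff the original edges share
    an endpoint.
    """
    nodes = [tuple(sorted(e)) for e in edges]
    adj = [(a, b) for a, b in combinations(nodes, 2)
           if set(a) & set(b)]
    return nodes, adj

# Original graph: a triangle (0,1,2) plus a pendant edge (2,3).
nodes, adj = line_graph([(0, 1), (1, 2), (2, 0), (2, 3)])
# A link-prediction query about edge (u, v) in the original graph
# becomes a node-classification query about node (u, v) here.
```

In this example the triangle's three edges all pairwise share endpoints, so they form a clique in the line graph, while the pendant edge (2, 3) attaches only to the edges incident to node 2.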
arXiv Detail & Related papers (2020-10-20T05:54:31Z) - Wasserstein-based Graph Alignment [56.84964475441094]
We introduce a new formulation of the one-to-many graph alignment problem, which aims at matching a node in the smaller graph with one or more nodes in the larger graph.
We show that our method leads to significant improvements with respect to the state-of-the-art algorithms for each of these tasks.
arXiv Detail & Related papers (2020-03-12T22:31:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.