Nonlinear Higher-Order Label Spreading
- URL: http://arxiv.org/abs/2006.04762v1
- Date: Mon, 8 Jun 2020 17:29:40 GMT
- Title: Nonlinear Higher-Order Label Spreading
- Authors: Francesco Tudisco, Austin R. Benson, Konstantin Prokopchik
- Abstract summary: We prove convergence of our nonlinear higher-order label spreading algorithm to the global solution of a constrained semi-supervised loss function.
We demonstrate the efficiency and efficacy of our approach on a variety of point cloud and network datasets.
- Score: 30.315829440115824
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Label spreading is a general technique for semi-supervised learning with
point cloud or network data, which can be interpreted as a diffusion of labels
on a graph. While there are many variants of label spreading, nearly all of
them are linear models, where the incoming information to a node is a weighted
sum of information from neighboring nodes. Here, we add nonlinearity to label
spreading through nonlinear functions of higher-order structure in the graph,
namely triangles in the graph. For a broad class of nonlinear functions, we
prove convergence of our nonlinear higher-order label spreading algorithm to
the global solution of a constrained semi-supervised loss function. We
demonstrate the efficiency and efficacy of our approach on a variety of point
cloud and network datasets, where the nonlinear higher-order model compares
favorably to classical label spreading, as well as hypergraph models and graph
neural networks.
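To make the abstract concrete, here is a minimal numpy sketch of classical (linear) label spreading alongside a higher-order variant that adds a nonlinear triangle term. The triangle mixing function used below (an elementwise geometric mean) is one illustrative choice from the broad class the paper allows, not necessarily the paper's own update; the parameter names `alpha` and `beta` are assumptions for this sketch.

```python
import numpy as np

def label_spreading(A, Y, alpha=0.9, iters=100):
    """Classical linear label spreading: F <- alpha * S @ F + (1 - alpha) * Y,
    with S = D^{-1/2} A D^{-1/2} the symmetrically normalized adjacency."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ A @ D_inv_sqrt
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y
    return F

def nonlinear_triangle_spreading(A, triangles, Y, alpha=0.5, beta=0.4, iters=100):
    """Illustrative higher-order variant: besides the edge term, each triangle
    {i, j, k} sends node i a nonlinear mix of the labels at j and k -- here
    the elementwise geometric mean sqrt(F[j] * F[k])."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ A @ D_inv_sqrt
    F = Y.astype(float).copy()
    for _ in range(iters):
        T = np.zeros_like(F)
        for (i, j, k) in triangles:
            T[i] += np.sqrt(F[j] * F[k])   # nonlinear triangle contribution
            T[j] += np.sqrt(F[i] * F[k])
            T[k] += np.sqrt(F[i] * F[j])
        F = alpha * S @ F + beta * T + (1 - alpha - beta) * Y
        # keep each row on the probability simplex
        F = F / np.maximum(F.sum(axis=1, keepdims=True), 1e-12)
    return F
```

On a toy graph made of two triangles joined by a bridge edge, both iterations propagate a single seed label per community to the unlabeled nodes in that community.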
Related papers
- You do not have to train Graph Neural Networks at all on text-attributed graphs [25.044734252779975]
We introduce TrainlessGNN, a linear GNN model capitalizing on the observation that text encodings from the same class often cluster together in a linear subspace.
Our experiments reveal that our trainless models can either match or even surpass their conventionally trained counterparts.
arXiv Detail & Related papers (2024-04-17T02:52:11Z)
- Learning to Approximate Adaptive Kernel Convolution on Graphs [4.434835769977399]
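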
We propose a diffusion learning framework, where the range of feature aggregation is controlled by the scale of a diffusion kernel.
Our model is tested on various standard datasets for node-wise classification, achieving state-of-the-art performance.
It is also validated on real-world brain network data for graph classification, demonstrating its practicality for Alzheimer's disease classification.
arXiv Detail & Related papers (2024-01-22T10:57:11Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
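For reference, the plain (non-kernelized) Gumbel-Softmax operator that NodeFormer builds its kernelized variant on can be sketched as follows; the function name and `tau` parameter are illustrative, and this is not the paper's scalable formulation.

```python
import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=None):
    """Plain Gumbel-Softmax sampling: a differentiable relaxation of
    categorical sampling. As tau -> 0 the output approaches a hard one-hot
    vector; NodeFormer kernelizes this operator for all-pair message passing."""
    rng = np.random.default_rng() if rng is None else rng
    # Gumbel(0, 1) noise via the inverse-CDF trick
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    z = (logits + g) / tau
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)
```

Each sample is a nonnegative vector summing to one, so it can stand in for a discrete edge-selection decision while remaining differentiable.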
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- Towards Semi-supervised Universal Graph Classification [6.339931887475018]
We study the problem of semi-supervised universal graph classification.
This problem is challenging due to a severe lack of labels and potential class shifts.
We propose a novel graph neural network framework named UGNN, which makes the best of unlabeled data from the subgraph perspective.
arXiv Detail & Related papers (2023-05-31T06:58:34Z)
- DiGress: Discrete Denoising diffusion for graph generation [79.13904438217592]
DiGress is a discrete denoising diffusion model for generating graphs with categorical node and edge attributes.
It achieves state-of-the-art performance on molecular and non-molecular datasets, with up to 3x validity improvement.
It is also the first model to scale to the large GuacaMol dataset containing 1.3M drug-like molecules.
arXiv Detail & Related papers (2022-09-29T12:55:03Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- A nonlinear diffusion method for semi-supervised learning on hypergraphs [30.41251351699783]
Hypergraph semi-supervised learning is the problem of assigning labels to all nodes in a hypergraph, given labels on just a few nodes.
We develop a nonlinear diffusion process on hypergraphs that spreads both features and labels following the hypergraph structure.
Our approach is much more accurate than several hypergraph neural networks, and also takes less time to train.
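A nonlinear hypergraph diffusion of this flavor can be sketched in a few lines. The aggregation rule below (a coordinatewise maximum over each hyperedge) is only an illustrative nonlinearity, not the provably convergent one from the paper, and the function name and parameters are assumptions.

```python
import numpy as np

def nonlinear_hypergraph_diffusion(hyperedges, Y, alpha=0.8, iters=30):
    """Illustrative nonlinear label diffusion on a hypergraph: each hyperedge
    aggregates its members' label vectors with a nonlinear rule (here the
    coordinatewise maximum) and sends the result back to its members."""
    F = Y.astype(float).copy()
    n = F.shape[0]
    for _ in range(iters):
        agg = np.zeros_like(F)
        count = np.zeros(n)
        for e in hyperedges:
            members = list(e)
            m = F[members].max(axis=0)          # nonlinear hyperedge aggregate
            for v in members:
                agg[v] += m
                count[v] += 1
        agg /= np.maximum(count[:, None], 1.0)  # average over incident hyperedges
        F = alpha * agg + (1 - alpha) * Y       # mix with the seed labels
    return F
```

On a five-node hypergraph with two overlapping hyperedges and one seed label per class, the diffusion assigns each unlabeled node the class of its nearer seed.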
arXiv Detail & Related papers (2021-03-27T09:54:16Z)
- Multilayer Clustered Graph Learning [66.94201299553336]
We use contrastive loss as a data fidelity term, in order to properly aggregate the observed layers into a representative graph.
Experiments show that our method recovers a clustered structure with respect to the ground-truth communities, yielding an effective algorithm for multilayer graph clustering.
arXiv Detail & Related papers (2020-10-29T09:58:02Z)
- Anisotropic Graph Convolutional Network for Semi-supervised Learning [7.843067454030999]
Graph convolutional networks learn effective node embeddings that have proven to be useful in achieving high-accuracy prediction results.
These networks suffer from over-smoothing and a shrinking effect, due in large part to the fact that they diffuse features across the edges of the graph using a linear Laplacian flow.
We propose an anisotropic graph convolutional network for semi-supervised node classification by introducing a nonlinear function that captures informative features from nodes, while preventing oversmoothing.
arXiv Detail & Related papers (2020-10-20T13:56:03Z)
- Line Graph Neural Networks for Link Prediction [71.00689542259052]
We consider the graph link prediction task, which is a classic graph analytical problem with many real-world applications.
In this formalism, a link prediction problem is converted to a graph classification task.
We propose to seek a radically different and novel path by making use of the line graphs in graph theory.
In particular, each node in a line graph corresponds to a unique edge in the original graph. Therefore, link prediction problems in the original graph can be equivalently solved as a node classification problem in its corresponding line graph, instead of a graph classification task.
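The edge-to-node correspondence described above is easy to make concrete. The sketch below builds a line graph from an edge list with plain Python (no graph library assumed); two line-graph nodes are adjacent exactly when their underlying edges share an endpoint.

```python
from itertools import combinations

def line_graph(edges):
    """Construct the line graph L(G): one node per edge of G, with two
    line-graph nodes adjacent iff the corresponding edges share an endpoint.
    A link prediction problem on G thereby becomes node classification on L(G)."""
    edges = [tuple(sorted(e)) for e in edges]
    adj = {e: set() for e in edges}
    for e, f in combinations(edges, 2):
        if set(e) & set(f):   # the two edges share an endpoint
            adj[e].add(f)
            adj[f].add(e)
    return adj
```

For a path 0-1-2-3, for example, the line graph is itself a path on the three edges, with (1, 2) adjacent to both (0, 1) and (2, 3).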
arXiv Detail & Related papers (2020-10-20T05:54:31Z)
- Adaptive Graph Auto-Encoder for General Data Clustering [90.8576971748142]
Graph-based clustering plays an important role in the clustering area.
Recent studies of graph convolutional neural networks have achieved impressive success on graph-structured data.
We propose a graph auto-encoder for general data clustering, which constructs the graph adaptively according to the generative perspective of graphs.
arXiv Detail & Related papers (2020-02-20T10:11:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.