Label-Enhanced Graph Neural Network for Semi-supervised Node
Classification
- URL: http://arxiv.org/abs/2205.15653v1
- Date: Tue, 31 May 2022 09:48:47 GMT
- Title: Label-Enhanced Graph Neural Network for Semi-supervised Node
Classification
- Authors: Le Yu, Leilei Sun, Bowen Du, Tongyu Zhu, Weifeng Lv
- Abstract summary: We present a label-enhanced learning framework for Graph Neural Networks (GNNs).
It first models each label as a virtual center for intra-class nodes and then jointly learns the representations of both nodes and labels.
Our approach not only smooths the representations of nodes belonging to the same class, but also explicitly encodes the label semantics into the learning process of GNNs.
- Score: 32.64730237473914
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have been widely applied in the semi-supervised
node classification task, where a key point lies in how to sufficiently
leverage the limited but valuable label information. Most of the classical GNNs
solely use the known labels for computing the classification loss at the
output. In recent years, several methods have been designed to additionally
utilize the labels at the input. Some of these methods augment the node
features by concatenating or adding the one-hot encodings of labels, while
other methods optimize the graph structure by assuming that neighboring nodes
tend to have the same label. To fully exploit the rich information of
labels, in this paper, we present a label-enhanced learning framework for GNNs,
which first models each label as a virtual center for intra-class nodes and
then jointly learns the representations of both nodes and labels. Our approach
not only smooths the representations of nodes belonging to the same class, but
also explicitly encodes the label semantics into the learning process of
GNNs. Moreover, a training node selection technique is provided to eliminate
the potential label leakage issue and guarantee the model generalization
ability. Finally, an adaptive self-training strategy is proposed to iteratively
enlarge the training set with more reliable pseudo labels and distinguish the
importance of each pseudo-labeled node during the model training process.
Experimental results on both real-world and synthetic datasets demonstrate that
our approach not only consistently outperforms state-of-the-art methods, but
also effectively smooths the representations of intra-class nodes.
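The abstract's two main ingredients (label embeddings acting as virtual class centers that smooth intra-class node representations, and confidence-weighted pseudo-labeling for self-training) can be sketched in plain NumPy. This is a minimal illustration, not the paper's implementation: the dimensions, the halfway update step, and the 0.6 confidence threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical sizes): 6 training nodes, 2 classes, 4-d embeddings.
num_classes, dim = 2, 4
node_emb = rng.normal(size=(6, dim))   # node representations from some GNN
labels = np.array([0, 0, 1, 1, 0, 1])  # known labels of the training nodes

# 1) Model each label as a virtual center: here the label embedding is
#    initialized as the mean of its intra-class node embeddings.
label_emb = np.stack([node_emb[labels == c].mean(axis=0)
                      for c in range(num_classes)])

# 2) A smoothing objective pulls each node toward its class center,
#    encouraging intra-class nodes to have similar representations.
def center_loss(nodes, centers, y):
    diffs = nodes - centers[y]
    return float((diffs ** 2).sum(axis=1).mean())

loss_before = center_loss(node_emb, label_emb, labels)

# One illustrative update: move each node halfway toward its class center.
node_emb = node_emb - 0.5 * (node_emb - label_emb[labels])
loss_after = center_loss(node_emb, label_emb, labels)
assert loss_after < loss_before  # intra-class representations got smoother

# 3) Self-training sketch: keep pseudo labels whose softmax confidence
#    exceeds a threshold, and weight each kept node by that confidence.
logits = rng.normal(size=(3, num_classes))  # logits for 3 unlabeled nodes
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
conf = probs.max(axis=1)
keep = conf > 0.6                           # hypothetical threshold
pseudo_labels = probs.argmax(axis=1)[keep]
pseudo_weights = conf[keep]                 # importance of each pseudo label
```

Note that the halving step makes the squared-distance loss drop to exactly one quarter of its previous value, which is why smoothing toward shared centers converges quickly; the paper's actual objective is learned jointly with the classification loss rather than applied as a hard update.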
Related papers
- KMF: Knowledge-Aware Multi-Faceted Representation Learning for Zero-Shot
Node Classification [75.95647590619929]
Zero-Shot Node Classification (ZNC) has been an emerging and crucial task in graph data analysis.
We propose a Knowledge-Aware Multi-Faceted framework (KMF) that enhances the richness of label semantics.
A novel geometric constraint is developed to alleviate the problem of prototype drift caused by node information aggregation.
arXiv Detail & Related papers (2023-08-15T02:38:08Z) - Contrastive Meta-Learning for Few-shot Node Classification [54.36506013228169]
Few-shot node classification aims to predict labels for nodes on graphs with only limited labeled nodes as references.
We create a novel contrastive meta-learning framework on graphs, named COSMIC, with two key designs.
arXiv Detail & Related papers (2023-06-27T02:22:45Z) - Pseudo Contrastive Learning for Graph-based Semi-supervised Learning [67.37572762925836]
Pseudo Labeling is a technique used to improve the performance of Graph Neural Networks (GNNs).
We propose a general framework for GNNs, termed Pseudo Contrastive Learning (PCL).
arXiv Detail & Related papers (2023-02-19T10:34:08Z) - Informative Pseudo-Labeling for Graph Neural Networks with Few Labels [12.83841767562179]
Graph Neural Networks (GNNs) have achieved state-of-the-art results for semi-supervised node classification on graphs.
The challenge of how to effectively learn GNNs with very few labels is still under-explored.
We propose a novel informative pseudo-labeling framework, called InfoGNN, to facilitate learning of GNNs with extremely few labels.
arXiv Detail & Related papers (2022-01-20T01:49:30Z) - Structure-Aware Label Smoothing for Graph Neural Networks [39.97741949184259]
Representing a label distribution as a one-hot vector is a common practice in training node classification models.
We propose a novel SALS (Structure-Aware Label Smoothing) method as an enhancement component to popular node classification models.
arXiv Detail & Related papers (2021-12-01T13:48:58Z) - Unified Robust Training for Graph Neural Networks against Label Noise [12.014301020294154]
We propose a new framework, UnionNET, for learning with noisy labels on graphs under a semi-supervised setting.
Our approach provides a unified solution for robustly training GNNs and performing label correction simultaneously.
arXiv Detail & Related papers (2021-03-05T01:17:04Z) - On the Equivalence of Decoupled Graph Convolution Network and Label
Propagation [60.34028546202372]
Some work shows that the coupled design is inferior to the decoupled one, which better supports deep graph propagation.
Despite effectiveness, the working mechanisms of the decoupled GCN are not well understood.
We propose a new label propagation method named Propagation then Training Adaptively (PTA), which overcomes the flaws of the decoupled GCN.
arXiv Detail & Related papers (2020-10-23T13:57:39Z) - Unifying Graph Convolutional Neural Networks and Label Propagation [73.82013612939507]
We study the relationship between LPA and GCN in terms of two aspects: feature/label smoothing and feature/label influence.
Based on our theoretical analysis, we propose an end-to-end model that unifies GCN and LPA for node classification.
Our model can also be seen as learning attention weights based on node labels, which is more task-oriented than existing feature-based attention models.
arXiv Detail & Related papers (2020-02-17T03:23:13Z) - Graph Inference Learning for Semi-supervised Classification [50.55765399527556]
We propose a Graph Inference Learning framework to boost the performance of semi-supervised node classification.
For learning the inference process, we introduce meta-optimization on structure relations from training nodes to validation nodes.
Comprehensive evaluations on four benchmark datasets demonstrate the superiority of our proposed GIL when compared against state-of-the-art methods.
arXiv Detail & Related papers (2020-01-17T02:52:30Z) - Multi-Label Graph Convolutional Network Representation Learning [20.059242373860013]
We propose a novel multi-label graph convolutional network (ML-GCN) for learning node representation for multi-label networks.
The two GCNs each handle one aspect of representation learning for nodes and labels, respectively, and they are seamlessly integrated under one objective function.
arXiv Detail & Related papers (2019-12-26T02:52:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.