Structure-Aware Label Smoothing for Graph Neural Networks
- URL: http://arxiv.org/abs/2112.00499v1
- Date: Wed, 1 Dec 2021 13:48:58 GMT
- Title: Structure-Aware Label Smoothing for Graph Neural Networks
- Authors: Yiwei Wang, Yujun Cai, Yuxuan Liang, Wei Wang, Henghui Ding, Muhao
Chen, Jing Tang, Bryan Hooi
- Abstract summary: Representing a label distribution as a one-hot vector is a common practice in training node classification models.
We propose a novel SALS (Structure-Aware Label Smoothing) method as an enhancement component to popular node classification models.
- Score: 39.97741949184259
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Representing a label distribution as a one-hot vector is a common practice in
training node classification models. However, the one-hot representation may
not adequately reflect the semantic characteristics of a node in different
classes, as some nodes may be semantically close to their neighbors in other
classes. This causes over-confidence, since the models are encouraged to
assign the full probability mass to a single class when classifying every node. While training models
with label smoothing can ease this problem to some degree, it still fails to
capture the nodes' semantic characteristics implied by the graph structures. In
this work, we propose a novel SALS (\textit{Structure-Aware Label Smoothing})
method as an enhancement component to popular node classification models. SALS
leverages the graph structures to capture the semantic correlations between the
connected nodes and generate the structure-aware label distribution to replace
the original one-hot label vectors, thus improving the node classification
performance without inference costs. Extensive experiments on seven node
classification benchmark datasets reveal the effectiveness of our SALS on
improving both transductive and inductive node classification. Empirical
results show that SALS is superior to the label smoothing method and enhances
the node classification models to outperform the baseline methods.
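The abstract does not give SALS's exact formulation, but the core idea of replacing one-hot targets with a structure-aware label distribution can be sketched as follows. The function name, the mixing weight `alpha`, and the use of a plain neighbour-label average are illustrative assumptions, not the paper's method:

```python
import numpy as np

def structure_aware_soft_labels(labels, adj, num_classes, alpha=0.1):
    """Hedged sketch: mix each node's one-hot label with the label
    distribution of its neighbours. `alpha` is a hypothetical mixing
    weight; the paper's actual formulation may differ."""
    one_hot = np.eye(num_classes)[labels]           # (n, C) one-hot targets
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1                               # guard isolated nodes
    neighbour_dist = (adj @ one_hot) / deg          # neighbour label distribution
    return (1 - alpha) * one_hot + alpha * neighbour_dist
```

Each row of the result is still a valid distribution (it sums to 1), so it can replace the one-hot targets in a standard cross-entropy loss without changing the model or adding inference cost.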
Related papers
- Posterior Label Smoothing for Node Classification [2.737276507021477]
We propose a simple yet effective label smoothing for the transductive node classification task.
We design the soft label to encapsulate the local context of the target node through the neighborhood label distribution.
In the following analysis, we find that incorporating global label statistics in posterior computation is the key to the success of label smoothing.
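A minimal sketch of combining a neighbourhood label distribution with global label statistics in a posterior-style computation; the prior-times-likelihood form, the function name, and the `eps` smoothing are assumptions for illustration, not the paper's formula:

```python
import numpy as np

def posterior_soft_label(neighbour_counts, global_freq, eps=1e-12):
    """Hedged sketch: treat the neighbourhood label counts as a
    likelihood and the global class frequencies as a prior, then
    normalise their product into a soft label (an assumed form,
    not the paper's exact computation)."""
    post = (neighbour_counts + eps) * global_freq
    return post / post.sum()
```

The global frequencies keep rare classes from being drowned out by purely local counts, which is one plausible reading of why global statistics matter here.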
arXiv Detail & Related papers (2024-06-01T11:59:49Z)
- Contrastive Meta-Learning for Few-shot Node Classification [54.36506013228169]
Few-shot node classification aims to predict labels for nodes on graphs with only limited labeled nodes as references.
We create a novel contrastive meta-learning framework on graphs, named COSMIC, with two key designs.
arXiv Detail & Related papers (2023-06-27T02:22:45Z)
- Leveraging Label Non-Uniformity for Node Classification in Graph Neural Networks [33.84217145105558]
In node classification using graph neural networks (GNNs), a typical model generates logits for different class labels at each node.
We introduce the key notion of label non-uniformity, which is derived from the Wasserstein distance between the softmax distribution of the logits and the uniform distribution.
We theoretically analyze how the label non-uniformity varies across the graph, which provides insights into boosting the model performance.
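The label non-uniformity described above can be sketched as the 1-D Wasserstein-1 distance between the softmax of a node's logits and the uniform distribution, computed via the difference of their CDFs. Treating class indices as unit-spaced points on the real line is a simplifying assumption about the ground metric, not necessarily the paper's choice:

```python
import numpy as np

def label_non_uniformity(logits):
    """Hedged sketch: W1 distance between softmax(logits) and the
    uniform distribution over classes, with class indices 0..C-1
    treated as unit-spaced points (an assumed ground metric)."""
    z = logits - logits.max()               # numerically stable softmax
    p = np.exp(z) / np.exp(z).sum()
    u = np.full(len(p), 1.0 / len(p))       # uniform reference distribution
    # For 1-D distributions on unit-spaced points, W1 is the sum of
    # absolute CDF differences (the last entry is always zero).
    return np.abs(np.cumsum(p - u))[:-1].sum()
```

Uniform logits give a non-uniformity of zero, while a sharply peaked softmax gives a large value, matching the intuition that confident nodes are far from uniform.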
arXiv Detail & Related papers (2023-04-29T01:09:56Z)
- Label-Enhanced Graph Neural Network for Semi-supervised Node Classification [32.64730237473914]
We present a label-enhanced learning framework for Graph Neural Networks (GNNs).
It first models each label as a virtual center for intra-class nodes and then jointly learns the representations of both nodes and labels.
Our approach not only smooths the representations of nodes belonging to the same class, but also explicitly encodes the label semantics into the learning process of GNNs.
arXiv Detail & Related papers (2022-05-31T09:48:47Z)
- Scalable and Adaptive Graph Neural Networks with Self-Label-Enhanced training [1.2183405753834562]
It is hard to directly implement Graph Neural Networks (GNNs) on large-scale graphs.
We propose Scalable and Adaptive Graph Neural Networks (SAGN).
We also propose a Self-Label-Enhanced (SLE) training framework that combines self-training with label propagation.
arXiv Detail & Related papers (2021-04-19T15:08:06Z)
- On the Equivalence of Decoupled Graph Convolution Network and Label Propagation [60.34028546202372]
Some work shows that coupling is inferior to decoupling, which better supports deep graph propagation.
Despite effectiveness, the working mechanisms of the decoupled GCN are not well understood.
We propose a new label propagation method named Propagation then Training Adaptively (PTA), which overcomes the flaws of the decoupled GCN.
arXiv Detail & Related papers (2020-10-23T13:57:39Z)
- Sequential Graph Convolutional Network for Active Learning [53.99104862192055]
We propose a novel pool-based Active Learning framework constructed on a sequential Graph Convolution Network (GCN).
With a small number of randomly sampled images as seed labelled examples, we learn the parameters of the graph to distinguish labelled vs unlabelled nodes.
We exploit these characteristics of GCN to select the unlabelled examples which are sufficiently different from labelled ones.
arXiv Detail & Related papers (2020-06-18T00:55:10Z)
- Unifying Graph Convolutional Neural Networks and Label Propagation [73.82013612939507]
We study the relationship between label propagation (LPA) and graph convolutional networks (GCN) in terms of two aspects: feature/label smoothing and feature/label influence.
Based on our theoretical analysis, we propose an end-to-end model that unifies GCN and LPA for node classification.
Our model can also be seen as learning attention weights based on node labels, which is more task-oriented than existing feature-based attention models.
arXiv Detail & Related papers (2020-02-17T03:23:13Z)
- Graph Inference Learning for Semi-supervised Classification [50.55765399527556]
We propose a Graph Inference Learning framework to boost the performance of semi-supervised node classification.
For learning the inference process, we introduce meta-optimization on structure relations from training nodes to validation nodes.
Comprehensive evaluations on four benchmark datasets demonstrate the superiority of our proposed GIL when compared against state-of-the-art methods.
arXiv Detail & Related papers (2020-01-17T02:52:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.