On the Equivalence of Decoupled Graph Convolution Network and Label
Propagation
- URL: http://arxiv.org/abs/2010.12408v2
- Date: Mon, 15 Feb 2021 12:23:39 GMT
- Title: On the Equivalence of Decoupled Graph Convolution Network and Label
Propagation
- Authors: Hande Dong, Jiawei Chen, Fuli Feng, Xiangnan He, Shuxian Bi, Zhaolin
Ding, Peng Cui
- Abstract summary: Recent work shows that coupling is inferior to decoupling, which better supports deep graph propagation.
Despite effectiveness, the working mechanisms of the decoupled GCN are not well understood.
We propose a new label propagation method named Propagation then Training Adaptively (PTA), which overcomes the flaws of the decoupled GCN.
- Score: 60.34028546202372
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The original design of Graph Convolution Network (GCN) couples feature
transformation and neighborhood aggregation for node representation learning.
Recently, some work has shown that coupling is inferior to decoupling, which
better supports deep graph propagation and has become the latest paradigm of
GCN (e.g., APPNP and SGCN). Despite its effectiveness, the working mechanisms of
the decoupled GCN are not well understood. In this paper, we explore the
decoupled GCN for semi-supervised node classification from a novel and
fundamental perspective -- label propagation. We conduct thorough theoretical
analyses, proving that the decoupled GCN is essentially the same as the
two-step label propagation: first, propagating the known labels along the graph
to generate pseudo-labels for the unlabeled nodes, and second, training normal
neural network classifiers on the augmented pseudo-labeled data. More
interestingly, we reveal the source of the decoupled GCN's effectiveness: going
beyond conventional label propagation, it automatically assigns structure- and
model-aware weights to the pseudo-labeled data. This explains why the decoupled
GCN is relatively robust to structure noise and over-smoothing but sensitive to
label noise and model initialization. Based on this insight,
we propose a new label propagation method named Propagation then Training
Adaptively (PTA), which overcomes the flaws of the decoupled GCN with a dynamic
and adaptive weighting strategy. Our PTA is simple yet more effective and
robust than the decoupled GCN. We empirically validate our findings on four
benchmark datasets, demonstrating the advantages of our method. The code is
available at https://github.com/DongHande/PT_propagation_then_training.
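
To make the two-step reading concrete, here is a minimal numpy sketch of the equivalence described above: personalized-PageRank-style propagation of the known labels (step one), followed by a weighted softmax-regression update on the resulting pseudo-labels (step two). The function names and the confidence-style weight are illustrative assumptions, not the authors' implementation; see the linked repository for the real code.

```python
import numpy as np

def normalize_adj(A):
    """Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def propagate_labels(A_norm, Y_onehot, labeled_mask, alpha=0.1, K=10):
    """Step 1: spread the known labels along the graph (APPNP-style
    personalized-PageRank iterations) to get soft pseudo-labels for all nodes."""
    Y0 = np.where(labeled_mask[:, None], Y_onehot, 0.0)  # seed with known labels
    Z = Y0.copy()
    for _ in range(K):
        Z = (1 - alpha) * (A_norm @ Z) + alpha * Y0
    return Z

def train_step(X, Z, W, lr=0.1):
    """Step 2: one gradient step of softmax regression on the pseudo-labels.
    The per-node weight w mimics (crudely) the adaptive weighting the paper
    analyzes; it is a simplification of mine, not the paper's exact scheme."""
    logits = X @ W
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    w = Z.max(axis=1, keepdims=True)            # confidence-style weight
    grad = X.T @ (w * (P - Z)) / X.shape[0]     # weighted cross-entropy gradient
    return W - lr * grad
```

Note the classifier in step two sees only node features X, never the graph: all graph structure enters through the pseudo-labels, which is the crux of the claimed equivalence.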
Related papers
- Label Deconvolution for Node Representation Learning on Large-scale
Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias via a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on the Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z)
- Label-Enhanced Graph Neural Network for Semi-supervised Node
Classification [32.64730237473914]
We present a label-enhanced learning framework for Graph Neural Networks (GNNs).
It first models each label as a virtual center for intra-class nodes and then jointly learns the representations of both nodes and labels.
Our approach not only smooths the representations of nodes belonging to the same class but also explicitly encodes the label semantics into the learning process of GNNs.
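
A rough sketch of the virtual-center idea, under my own assumptions (the helper name is hypothetical, not from the paper): one extra node per class is appended to the graph and wired to that class's training nodes, so a standard GNN run on the augmented graph learns node and label representations jointly.

```python
import numpy as np

def add_label_centers(A, y_train, train_idx, num_classes):
    """Append one virtual 'label center' node per class and connect it to
    the training nodes of that class (hypothetical sketch, not the paper's
    exact construction)."""
    n = A.shape[0]
    A_aug = np.zeros((n + num_classes, n + num_classes))
    A_aug[:n, :n] = A
    for node, label in zip(train_idx, y_train):
        A_aug[node, n + label] = 1.0   # node -> its class center
        A_aug[n + label, node] = 1.0   # class center -> node (undirected)
    return A_aug
```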
arXiv Detail & Related papers (2022-05-31T09:48:47Z)
- Label-GCN: An Effective Method for Adding Label Propagation to Graph
Convolutional Networks [1.8352113484137624]
We show that a modification of the first layer of a Graph Convolutional Network (GCN) can be used to effectively propagate label information across neighbor nodes.
This is done by selectively eliminating self-loops for the label features during the training phase of a GCN.
We show through several experiments that, depending on how many labels are available during the inference phase, this strategy can lead to a substantial improvement in the model performance.
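
The self-loop trick is concrete enough to sketch. Under my reading (helper names are mine), labels enter as extra input features but are aggregated with an adjacency whose diagonal is zeroed, so no node sees its own label during training:

```python
import numpy as np

def label_gcn_input(A, X, Y_onehot, labeled_mask):
    """Sketch: node features aggregate over self + neighbors as usual, while
    label features aggregate over neighbors only (self-loops removed),
    preventing a node from simply copying its own training label.
    Normalization is omitted for brevity."""
    n = A.shape[0]
    A_self = A + np.eye(n)              # ordinary GCN adjacency for X
    A_no_self = A * (1.0 - np.eye(n))   # self-loops stripped for labels
    Y_in = np.where(labeled_mask[:, None], Y_onehot, 0.0)
    H_feat = A_self @ X                 # (n, d) feature aggregation
    H_label = A_no_self @ Y_in          # (n, c) neighbor-label aggregation
    return np.concatenate([H_feat, H_label], axis=1)
```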
arXiv Detail & Related papers (2021-04-05T21:02:48Z)
- Unified Robust Training for Graph Neural Networks against Label Noise [12.014301020294154]
We propose a new framework, UnionNET, for learning with noisy labels on graphs under a semi-supervised setting.
Our approach provides a unified solution for robustly training GNNs and performing label correction simultaneously.
arXiv Detail & Related papers (2021-03-05T01:17:04Z)
- Cyclic Label Propagation for Graph Semi-supervised Learning [52.102251202186025]
We introduce a novel framework for graph semi-supervised learning called CycProp.
CycProp integrates GNNs into the process of label propagation in a cyclic and mutually reinforcing manner.
In particular, CycProp updates the node embeddings learned by the GNN module with information augmented by label propagation.
arXiv Detail & Related papers (2020-11-24T02:55:40Z)
- Investigating and Mitigating Degree-Related Biases in Graph
Convolutional Networks [62.8504260693664]
Graph Convolutional Networks (GCNs) show promising results for semi-supervised learning tasks on graphs.
In this paper, we analyze GCNs in regard to the node degree distribution.
We develop a novel Self-Supervised Degree-Specific GCN (SL-DSGC) that mitigates the degree biases of GCNs.
arXiv Detail & Related papers (2020-06-28T16:26:47Z)
- Sequential Graph Convolutional Network for Active Learning [53.99104862192055]
We propose a novel pool-based Active Learning framework constructed on a sequential Graph Convolution Network (GCN).
With a small number of randomly sampled images as seed labelled examples, we learn the parameters of the graph to distinguish labelled vs unlabelled nodes.
We exploit these characteristics of GCN to select the unlabelled examples which are sufficiently different from labelled ones.
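
As a toy illustration of the selection rule (my own simplification, not the authors' implementation): if the graph model outputs a per-node score of how "labelled-like" each node is, the sampler queries the unlabelled nodes with the lowest scores, i.e., those most different from the labelled set.

```python
import numpy as np

def select_queries(label_likeness, unlabeled_idx, budget):
    """Toy selection rule: 'label_likeness' holds the model's per-node score
    of similarity to the labelled pool; query the unlabelled nodes it rates
    least similar (hypothetical helper, my assumption)."""
    unlabeled_idx = np.asarray(unlabeled_idx)
    order = np.argsort(label_likeness[unlabeled_idx])  # ascending confidence
    return unlabeled_idx[order[:budget]]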
arXiv Detail & Related papers (2020-06-18T00:55:10Z)
- Unifying Graph Convolutional Neural Networks and Label Propagation [73.82013612939507]
We study the relationship between LPA and GCN in terms of two aspects: feature/label smoothing and feature/label influence.
Based on our theoretical analysis, we propose an end-to-end model that unifies GCN and LPA for node classification.
Our model can also be seen as learning attention weights based on node labels, which is more task-oriented than existing feature-based attention models.
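
A simplified reading of the unification, as a sketch (my own toy objective, not the paper's exact formulation): learn edge weights such that label propagation over them reconstructs the known labels; the learned weights can then play the role of label-informed attention in the GCN aggregation.

```python
import numpy as np

def lpa_reconstruction_loss(edge_w, A_mask, Y_onehot, labeled_idx, K=5):
    """Toy objective: propagate known labels over a learnable weighted
    adjacency and penalize mismatch on the labelled nodes. Minimizing it
    (e.g., by any gradient method) yields edge weights usable as
    label-aware attention. Simplified sketch under my own assumptions."""
    A = np.abs(edge_w) * A_mask                              # weighted edges
    A = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-9)   # row-normalize
    Z = Y_onehot.copy()
    for _ in range(K):
        Z = A @ Z                                            # K propagation steps
    err = Z[labeled_idx] - Y_onehot[labeled_idx]
    return float((err ** 2).sum())                           # reconstruction error
```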
arXiv Detail & Related papers (2020-02-17T03:23:13Z)