Label-GCN: An Effective Method for Adding Label Propagation to Graph
Convolutional Networks
- URL: http://arxiv.org/abs/2104.02153v1
- Date: Mon, 5 Apr 2021 21:02:48 GMT
- Title: Label-GCN: An Effective Method for Adding Label Propagation to Graph
Convolutional Networks
- Authors: Claudio Bellei, Hussain Alattas, and Nesrine Kaaniche
- Abstract summary: We show that a modification of the first layer of a Graph Convolutional Network (GCN) can be used to effectively propagate label information across neighbor nodes.
This is done by selectively eliminating self-loops for the label features during the training phase of a GCN.
We show through several experiments that, depending on how many labels are available during the inference phase, this strategy can lead to a substantial improvement in the model performance.
- Score: 1.8352113484137624
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We show that a modification of the first layer of a Graph Convolutional
Network (GCN) can be used to effectively propagate label information across
neighbor nodes, for binary and multi-class classification problems. This is
done by selectively eliminating self-loops for the label features during the
training phase of a GCN. The GCN architecture is otherwise unchanged, without
any extra hyper-parameters, and can be used in both a transductive and
inductive setting. We show through several experiments that, depending on how
many labels are available during the inference phase, this strategy can lead to
a substantial improvement in the model performance compared to a standard GCN
approach, including with imbalanced datasets.
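The mechanism described above can be sketched in a few lines: ordinary node features propagate through a normalized adjacency with self-loops (standard GCN), while the label-feature columns propagate through an adjacency without self-loops, so a node never sees its own label during training. This is a minimal numpy sketch of the idea; the function name, weight shapes, and normalization details are assumptions, not the authors' code.

```python
import numpy as np

def label_gcn_first_layer(A, X_feat, Y_labels, W_feat, W_label):
    """Sketch of a Label-GCN-style first layer (assumed form).

    X_feat propagates with self-loops as in a standard GCN; the label
    features Y_labels propagate WITHOUT self-loops, so each node only
    receives label information from its neighbors, never from itself.
    """
    n = A.shape[0]

    # Standard GCN normalization with self-loops for the ordinary features.
    A_self = A + np.eye(n)
    d_self = A_self.sum(axis=1)
    A_self_norm = A_self / np.sqrt(np.outer(d_self, d_self))

    # Same normalization but WITHOUT self-loops for the label channel.
    d = A.sum(axis=1)
    d[d == 0] = 1.0  # guard against isolated nodes
    A_norm = A / np.sqrt(np.outer(d, d))

    H = A_self_norm @ X_feat @ W_feat + A_norm @ Y_labels @ W_label
    return np.maximum(H, 0.0)  # ReLU
```

On a 3-node path graph where only node 0 is labeled, node 0's output receives no contribution from its own label, while its neighbor does, which is exactly the leakage-free behavior the abstract describes.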
Related papers
- ELU-GCN: Effectively Label-Utilizing Graph Convolutional Network [17.273475235903355]
We propose a new two-step framework called ELU-GCN.
In the first stage, ELU-GCN conducts graph learning to learn a new graph structure.
In the second stage, we design a new graph contrastive learning on the GCN framework for representation learning.
arXiv Detail & Related papers (2024-11-04T17:08:59Z)
- Binary Graph Convolutional Network with Capacity Exploration [58.99478502486377]
We propose a Binary Graph Convolutional Network (Bi-GCN), which binarizes both the network parameters and input node attributes.
Our Bi-GCN can reduce the memory consumption by an average of 31x for both the network parameters and input data, and accelerate the inference speed by an average of 51x.
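The binarization that makes these savings possible can be sketched as 1-bit quantization with a per-matrix scaling factor (XNOR-Net style), a common scheme that Bi-GCN-like models build on; the exact quantizer and kernel details here are assumptions, and the dense matmul stands in for the bitwise XNOR/popcount kernels that give the real speedup.

```python
import numpy as np

def binarize(M):
    """1-bit quantization: replace each entry by its sign, scaled by the
    mean absolute value so the overall magnitude is roughly preserved."""
    alpha = np.abs(M).mean()
    return alpha * np.sign(np.where(M == 0, 1.0, M))

def bi_gcn_layer(A_norm, X, W):
    # Binarize both the input node attributes and the layer weights, as the
    # Bi-GCN summary describes; propagation itself stays full-precision here.
    return A_norm @ binarize(X) @ binarize(W)
```

Since every binarized entry is one of two values (±alpha), weights and attributes can be stored as single bits plus one float per matrix, which is where the ~31x memory reduction comes from.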
arXiv Detail & Related papers (2022-10-24T12:05:17Z)
- Robust Node Classification on Graphs: Jointly from Bayesian Label Transition and Topology-based Label Propagation [5.037076816350975]
In recent years, evidence has emerged that the performance of GNN-based node classification can deteriorate substantially under topological perturbations.
We propose a new label inference model, namely LInDT, which integrates both Bayesian label transition and topology-based label propagation.
Experiments on five graph datasets demonstrate the superiority of LInDT for GNN-based node classification under three scenarios of topological perturbations.
arXiv Detail & Related papers (2022-08-21T01:56:25Z)
- Label-Enhanced Graph Neural Network for Semi-supervised Node Classification [32.64730237473914]
We present a label-enhanced learning framework for Graph Neural Networks (GNNs).
It first models each label as a virtual center for intra-class nodes and then jointly learns the representations of both nodes and labels.
Our approach not only smooths the representations of nodes belonging to the same class, but also explicitly encodes the label semantics into the learning process of GNNs.
arXiv Detail & Related papers (2022-05-31T09:48:47Z)
- Cyclic Label Propagation for Graph Semi-supervised Learning [52.102251202186025]
We introduce a novel framework for graph semi-supervised learning called CycProp.
CycProp integrates GNNs into the process of label propagation in a cyclic and mutually reinforcing manner.
In particular, our proposed CycProp updates the node embeddings learned by the GNN module with information augmented by label propagation.
arXiv Detail & Related papers (2020-11-24T02:55:40Z)
- Deperturbation of Online Social Networks via Bayesian Label Transition [5.037076816350975]
Online social networks (OSNs) classify users into different categories based on their online activities and interests.
A small number of users, so-called perturbators, may perform random activities on an OSN, significantly degrading the performance of a GCN-based node classification task.
We develop a GCN defense model, namely GraphLT, which uses the concept of label transition.
arXiv Detail & Related papers (2020-10-27T08:15:12Z)
- On the Equivalence of Decoupled Graph Convolution Network and Label Propagation [60.34028546202372]
Prior work shows that coupling is inferior to decoupling, which better supports deep graph propagation.
Despite effectiveness, the working mechanisms of the decoupled GCN are not well understood.
We propose a new label propagation method named propagation then training Adaptively (PTA), which overcomes the flaws of the decoupled GCN.
arXiv Detail & Related papers (2020-10-23T13:57:39Z)
- Sequential Graph Convolutional Network for Active Learning [53.99104862192055]
We propose a novel pool-based Active Learning framework constructed on a sequential Graph Convolution Network (GCN).
With a small number of randomly sampled images as seed labelled examples, we learn the parameters of the graph to distinguish labelled vs unlabelled nodes.
We exploit these characteristics of GCN to select the unlabelled examples which are sufficiently different from labelled ones.
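Selecting unlabelled examples that are "sufficiently different" from labelled ones can be sketched as a core-set-style farthest-point rule over the GCN embeddings; the function name, the Euclidean metric, and the one-shot (non-iterative) selection are assumptions for illustration, not the paper's exact criterion.

```python
import numpy as np

def select_queries(embeddings, labeled_idx, k):
    """Pick the k unlabelled nodes whose embeddings are farthest from
    any labelled node (a simple core-set-style acquisition sketch)."""
    labeled = set(labeled_idx)
    unlabeled = [i for i in range(len(embeddings)) if i not in labeled]
    # Distance from each unlabelled node to its nearest labelled node.
    dist = {i: min(np.linalg.norm(embeddings[i] - embeddings[j])
                   for j in labeled) for i in unlabeled}
    return sorted(unlabeled, key=lambda i: -dist[i])[:k]
```

Nodes far from every labelled example are the ones the current labels explain worst, so querying them first tends to cover the embedding space with few annotations.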
arXiv Detail & Related papers (2020-06-18T00:55:10Z)
- Unifying Graph Convolutional Neural Networks and Label Propagation [73.82013612939507]
We study the relationship between LPA and GCN in terms of two aspects: feature/label smoothing and feature/label influence.
Based on our theoretical analysis, we propose an end-to-end model that unifies GCN and LPA for node classification.
Our model can also be seen as learning attention weights based on node labels, which is more task-oriented than existing feature-based attention models.
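For reference, the LPA side of this GCN-LPA analysis is classical iterative label propagation with clamping of the known labels; this is a generic sketch of that baseline (Zhu-Ghahramani style), not the unified model proposed in the paper.

```python
import numpy as np

def label_propagation(A, Y, labeled_mask, iters=50):
    """Iterate F <- P F with known labels clamped each step, where P is the
    row-normalized adjacency; returns the predicted class per node."""
    d = A.sum(axis=1)
    d[d == 0] = 1.0
    P = A / d[:, None]               # row-normalized transition matrix
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = P @ F
        F[labeled_mask] = Y[labeled_mask]  # clamp the observed labels
    return F.argmax(axis=1)
```

On a graph with two disconnected pairs and one labelled node per pair, the labels spread to the unlabelled partners, which is the smoothing behavior the equivalence analysis builds on.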
arXiv Detail & Related papers (2020-02-17T03:23:13Z)
- Graph Inference Learning for Semi-supervised Classification [50.55765399527556]
We propose a Graph Inference Learning framework to boost the performance of semi-supervised node classification.
For learning the inference process, we introduce meta-optimization on structure relations from training nodes to validation nodes.
Comprehensive evaluations on four benchmark datasets demonstrate the superiority of our proposed GIL when compared against state-of-the-art methods.
arXiv Detail & Related papers (2020-01-17T02:52:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.