Masked Label Prediction: Unified Message Passing Model for
Semi-Supervised Classification
- URL: http://arxiv.org/abs/2009.03509v5
- Date: Mon, 10 May 2021 02:23:20 GMT
- Title: Masked Label Prediction: Unified Message Passing Model for
Semi-Supervised Classification
- Authors: Yunsheng Shi, Zhengjie Huang, Shikun Feng, Hui Zhong, Wenjin Wang, Yu
Sun
- Abstract summary: We propose a novel Unified Message Passing Model (UniMP) that can incorporate feature and label propagation at both training and inference time.
UniMP conceptually unifies feature propagation and label propagation and is empirically powerful.
It obtains new state-of-the-art semi-supervised classification results in Open Graph Benchmark (OGB)
- Score: 25.064700425166176
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural network (GNN) and label propagation algorithm (LPA) are both
message passing algorithms, which have achieved superior performance in
semi-supervised classification. GNN performs feature propagation by a neural
network to make predictions, while LPA uses label propagation across graph
adjacency matrix to get results. However, there is still no effective way to
directly combine these two kinds of algorithms. To address this issue, we
propose a novel Unified Message Passing Model (UniMP) that can incorporate
feature and label propagation at both training and inference time. First, UniMP
adopts a Graph Transformer network, taking feature embedding and label
embedding as input information for propagation. Second, to train the network
without overfitting to its own (self-loop) input label information, UniMP
introduces a masked label prediction strategy, in which some percentage of the
input label information is masked at random and then predicted. UniMP conceptually
unifies feature propagation and label propagation and is empirically powerful.
It obtains new state-of-the-art semi-supervised classification results in Open
Graph Benchmark (OGB).
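The masked label prediction strategy can be illustrated with a short sketch. The code below randomly hides a fraction of the observed training labels: the remaining labels would be fed to the model as label embeddings, while the hidden ones serve as prediction targets. All names (`mask_labels`, the dict-based label representation) are illustrative assumptions, not UniMP's actual API.

```python
import random

def mask_labels(train_labels, mask_rate=0.5, seed=0):
    """Split observed labels into input labels (fed to the model as label
    embeddings) and masked labels (the prediction targets).

    train_labels: dict mapping node id -> class id for labeled nodes.
    Returns (input_labels, target_labels), two disjoint dicts.
    Hypothetical helper illustrating the strategy, not UniMP's code.
    """
    rng = random.Random(seed)
    nodes = sorted(train_labels)
    num_masked = int(len(nodes) * mask_rate)
    masked = set(rng.sample(nodes, num_masked))
    input_labels = {n: c for n, c in train_labels.items() if n not in masked}
    target_labels = {n: c for n, c in train_labels.items() if n in masked}
    return input_labels, target_labels
```

At each training epoch a fresh mask (new seed) would be drawn, so every labeled node eventually appears as both input and target, mirroring BERT-style masked prediction applied to labels rather than tokens.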
Related papers
- Echoless Label-Based Pre-computation for Memory-Efficient Heterogeneous Graph Learning [70.73716819236627]
Heterogeneous Graph Neural Networks (HGNNs) are widely used for deep learning on heterogeneous graphs.
HGNNs require repetitive message passing during training, limiting efficiency for large-scale real-world graphs.
We propose pre-computation-based HGNNs that only perform message passing once during preprocessing.
arXiv Detail & Related papers (2025-11-14T08:53:39Z)
- LOSS-GAT: Label Propagation and One-Class Semi-Supervised Graph Attention Network for Fake News Detection [2.6396287656676725]
LOSS-GAT is a semi-supervised, one-class approach for fake news detection.
We employ a two-step label propagation algorithm to categorize news into two groups: interest (fake) and non-interest (real).
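The summary gives no algorithmic detail, but classic label propagation, which such two-step schemes build on, can be sketched as iterative neighbour averaging with seed labels held fixed. The graph and score encodings below are illustrative assumptions, not LOSS-GAT's implementation.

```python
def propagate_labels(adj, labels, num_iters=10):
    """Label propagation sketch: iteratively average neighbour scores,
    keeping seed labels clamped.

    adj: dict node -> list of neighbour nodes.
    labels: dict seed node -> score, e.g. 1.0 (interest/fake)
            or 0.0 (non-interest/real).
    Unlabeled nodes start at the neutral score 0.5.
    """
    scores = {n: labels.get(n, 0.5) for n in adj}
    for _ in range(num_iters):
        new = {}
        for n, nbrs in adj.items():
            if n in labels:                     # clamp seed labels
                new[n] = labels[n]
            elif nbrs:                          # average over neighbours
                new[n] = sum(scores[m] for m in nbrs) / len(nbrs)
            else:                               # isolated node: unchanged
                new[n] = scores[n]
        scores = new
    return scores
```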
arXiv Detail & Related papers (2024-02-13T12:02:37Z)
- Domain-adaptive Message Passing Graph Neural Network [67.35534058138387]
Cross-network node classification (CNNC) aims to classify nodes in a label-deficient target network by transferring the knowledge from a source network with abundant labels.
We propose a domain-adaptive message passing graph neural network (DM-GNN), which integrates graph neural network (GNN) with conditional adversarial domain adaptation.
arXiv Detail & Related papers (2023-08-31T05:26:08Z)
- ProtoCon: Pseudo-label Refinement via Online Clustering and Prototypical Consistency for Efficient Semi-supervised Learning [60.57998388590556]
ProtoCon is a novel method for confidence-based pseudo-labeling.
The online nature of ProtoCon allows it to utilise the label history of the entire dataset in one training cycle.
It delivers significant gains and faster convergence over state-of-the-art methods.
arXiv Detail & Related papers (2023-03-22T23:51:54Z)
- Mixed Graph Contrastive Network for Semi-Supervised Node Classification [63.924129159538076]
We propose a novel graph contrastive learning method, termed Mixed Graph Contrastive Network (MGCN).
In our method, we improve the discriminative capability of the latent embeddings by an unperturbed augmentation strategy and a correlation reduction mechanism.
By combining the two settings, we extract rich supervision information from both the abundant nodes and the rare yet valuable labeled nodes for discriminative representation learning.
arXiv Detail & Related papers (2022-06-06T14:26:34Z)
- Label-Enhanced Graph Neural Network for Semi-supervised Node Classification [32.64730237473914]
We present a label-enhanced learning framework for Graph Neural Networks (GNNs).
It first models each label as a virtual center for intra-class nodes and then jointly learns the representations of both nodes and labels.
Our approach could not only smooth the representations of nodes belonging to the same class, but also explicitly encode the label semantics into the learning process of GNNs.
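One minimal reading of "each label as a virtual center" is a per-class centroid of labeled-node features; the helper below computes such centers. This is an assumed simplification for illustration only, not the paper's exact construction, which learns label representations jointly with node representations.

```python
def class_centers(features, labels):
    """Average the features of the labeled nodes in each class, yielding
    one 'virtual center' vector per label (illustrative simplification).

    features: dict node -> list[float] feature vector.
    labels:   dict node -> class id (labeled nodes only).
    Returns dict class id -> center vector.
    """
    sums, counts = {}, {}
    for n, c in labels.items():
        f = features[n]
        if c not in sums:
            sums[c] = [0.0] * len(f)
            counts[c] = 0
        sums[c] = [s + x for s, x in zip(sums[c], f)]
        counts[c] += 1
    return {c: [s / counts[c] for s in vec] for c, vec in sums.items()}
```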
arXiv Detail & Related papers (2022-05-31T09:48:47Z)
- Why Propagate Alone? Parallel Use of Labels and Features on Graphs [42.01561812621306]
Graph neural networks (GNNs) and label propagation represent two interrelated modeling strategies designed to exploit graph structure in tasks such as node property prediction.
We show that a label trick can be reduced to an interpretable, deterministic training objective composed of two factors.
arXiv Detail & Related papers (2021-10-14T07:34:11Z)
- Semi-Supervised Semantic Segmentation with Cross Pseudo Supervision [56.950950382415925]
We propose a novel consistency regularization approach, called cross pseudo supervision (CPS)
The CPS consistency has two roles: encourage high similarity between the predictions of two perturbed networks for the same input image, and expand training data by using the unlabeled data with pseudo labels.
Experiment results show that our approach achieves the state-of-the-art semi-supervised segmentation performance on Cityscapes and PASCAL VOC 2012.
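The cross-supervision role described above can be sketched as follows: each of the two perturbed networks produces hard pseudo-labels (argmax over class scores) that become the training targets for the other branch. The function below is a hedged simplification; in the paper, supervision is applied pixel-wise on segmentation maps with a cross-entropy loss.

```python
def cps_targets(logits_a, logits_b):
    """Cross pseudo supervision sketch: each branch's hard pseudo-label
    supervises the other branch.

    logits_a, logits_b: per-sample class-score lists from two perturbed
    networks on the same input.
    Returns (targets_for_a, targets_for_b): branch A is trained toward
    B's pseudo-labels and vice versa. Illustrative only, not CPS's code.
    """
    pseudo_a = [max(range(len(p)), key=p.__getitem__) for p in logits_a]
    pseudo_b = [max(range(len(p)), key=p.__getitem__) for p in logits_b]
    return pseudo_b, pseudo_a
```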
arXiv Detail & Related papers (2021-06-02T15:21:56Z)
- Group-aware Label Transfer for Domain Adaptive Person Re-identification [179.816105255584]
Unsupervised Domain Adaptation (UDA) person re-identification (ReID) aims at adapting a model trained on a labeled source-domain dataset to a target-domain dataset without any further annotations.
Most successful UDA-ReID approaches combine clustering-based pseudo-label prediction with representation learning and perform the two steps in an alternating fashion.
We propose a Group-aware Label Transfer (GLT) algorithm, which enables the online interaction and mutual promotion of pseudo-label prediction and representation learning.
arXiv Detail & Related papers (2021-03-23T07:57:39Z)
- On the Equivalence of Decoupled Graph Convolution Network and Label Propagation [60.34028546202372]
Some work shows that the coupled design is inferior to the decoupled one, which better supports deep graph propagation.
Despite effectiveness, the working mechanisms of the decoupled GCN are not well understood.
We propose a new label propagation method named Propagation then Training Adaptively (PTA), which overcomes the flaws of the decoupled GCN.
arXiv Detail & Related papers (2020-10-23T13:57:39Z)
- Unifying Graph Convolutional Neural Networks and Label Propagation [73.82013612939507]
We study the relationship between LPA and GCN in terms of two aspects: feature/label smoothing and feature/label influence.
Based on our theoretical analysis, we propose an end-to-end model that unifies GCN and LPA for node classification.
Our model can also be seen as learning attention weights based on node labels, which is more task-oriented than existing feature-based attention models.
arXiv Detail & Related papers (2020-02-17T03:23:13Z)
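A minimal illustration of "attention weights based on node labels": edges between training nodes that share a label get a higher weight than edges between nodes with different labels. In the actual model these weights are learned end-to-end; the fixed values below (`same`, `diff`) and the helper name are purely illustrative assumptions.

```python
def label_attention_weights(edges, labels, same=1.0, diff=0.1):
    """Assign each edge a weight from the labels of its endpoints:
    same-label pairs get `same`, different-label pairs get `diff`,
    and pairs with an unlabeled endpoint get a neutral weight.
    Illustrative sketch of label-aware edge weighting, not the
    learned attention of the paper.

    edges:  list of (u, v) node pairs.
    labels: dict node -> class id (labeled nodes only).
    """
    weights = {}
    neutral = (same + diff) / 2
    for u, v in edges:
        if u in labels and v in labels:
            weights[(u, v)] = same if labels[u] == labels[v] else diff
        else:
            weights[(u, v)] = neutral
    return weights
```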
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.