Pseudo Contrastive Learning for Graph-based Semi-supervised Learning
- URL: http://arxiv.org/abs/2302.09532v3
- Date: Tue, 19 Dec 2023 03:35:13 GMT
- Title: Pseudo Contrastive Learning for Graph-based Semi-supervised Learning
- Authors: Weigang Lu, Ziyu Guan, Wei Zhao, Yaming Yang, Yuanhai Lv, Lining Xing,
Baosheng Yu, Dacheng Tao
- Abstract summary: Pseudo Labeling is a technique used to improve the performance of Graph Neural Networks (GNNs).
We propose a general framework for GNNs, termed Pseudo Contrastive Learning (PCL).
- Score: 67.37572762925836
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pseudo Labeling is a technique used to improve the performance of
semi-supervised Graph Neural Networks (GNNs) by generating additional
pseudo-labels based on confident predictions. However, the quality of generated
pseudo-labels has been a longstanding concern due to the sensitivity of the
classification objective with respect to the given labels. To avoid the
untrustworthy classification supervision indicating "a node belongs to a
specific class," we favor the fault-tolerant contrasting supervision
demonstrating "two nodes do not belong to the same class." Thus, the problem
of generating high-quality pseudo-labels is then transformed into a relaxed
version, i.e., identifying reliable negative pairs. To achieve this, we propose
a general framework for GNNs, termed Pseudo Contrastive Learning (PCL). It
separates two nodes whose positive and negative pseudo-labels target the same
class. To incorporate topological knowledge into learning, we devise a
topologically weighted contrastive loss that spends more effort separating
negative pairs with smaller topological distances. Experimentally, we apply PCL
to various GNNs, which consistently outperform their counterparts using other
popular general techniques on five real-world graphs.
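To make the mechanism concrete, here is a minimal PyTorch sketch of the two steps the abstract describes: pairing a node confidently assigned to class c (positive pseudo-label) with a node confidently excluded from c (negative pseudo-label), and separating such pairs with a loss weighted by topological distance. The thresholds, the shortest-path distance matrix `spd`, and the exponential weighting are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def reliable_negative_pairs(probs, pos_thresh=0.9, neg_thresh=0.05):
    """Pair node u, confidently predicted as class c (positive pseudo-label),
    with node v whose probability for c is negligible (negative pseudo-label).
    Thresholds are illustrative; PCL's actual selection rule may differ."""
    pos_conf, pos_cls = probs.max(dim=-1)
    pairs = []
    for c in range(probs.size(1)):
        pos_nodes = ((pos_cls == c) & (pos_conf > pos_thresh)).nonzero(as_tuple=True)[0]
        neg_nodes = (probs[:, c] < neg_thresh).nonzero(as_tuple=True)[0]
        pairs.extend((u.item(), v.item()) for u in pos_nodes for v in neg_nodes)
    return torch.tensor(pairs, dtype=torch.long)

def topo_weighted_negative_loss(z, neg_pairs, spd, sigma=1.0):
    """Push apart embeddings of negative pairs, spending more effort on
    pairs with smaller topological distance.
    z   : (N, d) node embeddings
    spd : (N, N) precomputed shortest-path distances (an assumption here)"""
    zi = F.normalize(z[neg_pairs[:, 0]], dim=-1)
    zj = F.normalize(z[neg_pairs[:, 1]], dim=-1)
    sim = (zi * zj).sum(-1)                 # cosine similarity per pair
    d = spd[neg_pairs[:, 0], neg_pairs[:, 1]].float()
    w = torch.exp(-d / sigma)               # topologically closer pairs get larger weight
    return (w * F.softplus(sim)).sum() / w.sum()
```

In practice a term like this would be added to the usual cross-entropy on the labeled nodes.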
Related papers
- NP$^2$L: Negative Pseudo Partial Labels Extraction for Graph Neural Networks [48.39834063008816]
Pseudo labels are used in graph neural networks (GNNs) to assist learning at the message-passing level.
In this paper, we introduce a new method to use pseudo labels in GNNs.
We show that pseudo labels are more accurate when they are selected from non-overlapping partial labels and defined as negative node-pair relations.
arXiv Detail & Related papers (2023-10-02T11:13:59Z)
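As a rough illustration of negative pseudo partial labels, the sketch below takes each node's k least-probable classes as a partial label meaning "not these classes" and marks a pair (u, v) as negative when v's predicted class falls in u's negative set; this selection rule is a simplified stand-in for the paper's non-overlap criterion.

```python
import torch

def negative_partial_labels(probs, k=2):
    """Take each node's k least-probable classes as a negative partial
    label ('this node is not in these classes'). A simplified stand-in
    for NP$^2$L's actual extraction criterion."""
    return probs.topk(k, dim=-1, largest=False).indices   # (N, k)

def negative_pair_mask(neg_labels, pos_pred):
    """mask[u, v] is True when v's predicted class lies in u's negative
    partial label set, i.e. (u, v) can be treated as a negative pair."""
    # broadcast (N, 1, k) against (1, N, 1) -> (N, N, k) -> (N, N)
    return (neg_labels.unsqueeze(1) == pos_pred.view(1, -1, 1)).any(-1)
```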
- STERLING: Synergistic Representation Learning on Bipartite Graphs [78.86064828220613]
A fundamental challenge of bipartite graph representation learning is how to extract node embeddings.
Most recent bipartite graph SSL methods are based on contrastive learning, which learns embeddings by discriminating positive and negative node pairs.
We introduce a novel synergistic representation learning model (STERLING) to learn node embeddings without negative node pairs.
arXiv Detail & Related papers (2023-01-25T03:21:42Z)
- Robust Node Classification on Graphs: Jointly from Bayesian Label Transition and Topology-based Label Propagation [5.037076816350975]
In recent years, evidence has emerged that the performance of GNN-based node classification can deteriorate substantially under topological perturbations.
We propose a new label inference model, namely LInDT, which integrates both Bayesian label transition and topology-based label propagation.
Experiments on five graph datasets demonstrate the superiority of LInDT for GNN-based node classification under three scenarios of topological perturbations.
arXiv Detail & Related papers (2022-08-21T01:56:25Z)
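The topology-based label propagation component can be pictured with a standard clamped propagation loop, sketched below; the Bayesian label-transition half of LInDT is omitted, and `A_hat` (row-normalized adjacency) and the clamping scheme are generic assumptions.

```python
import torch

def label_propagation(A_hat, y_soft, train_mask, y_true, num_iters=10, alpha=0.9):
    """Iteratively smooth soft labels over the graph, clamping training
    nodes to their known labels. A textbook LP sketch, not LInDT itself.
    A_hat  : (N, N) row-normalized adjacency (with self-loops)
    y_soft : (N, C) initial soft labels, e.g. GNN predictions
    y_true : (N, C) one-hot labels, valid where train_mask is True"""
    y = y_soft.clone()
    for _ in range(num_iters):
        y = alpha * (A_hat @ y) + (1 - alpha) * y_soft
        y[train_mask] = y_true[train_mask]   # clamp observed labels
    return y
```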
- TAM: Topology-Aware Margin Loss for Class-Imbalanced Node Classification [33.028354930416754]
We propose the Topology-Aware Margin (TAM) loss, which reflects local topology in the learning objective.
Our method consistently exhibits superiority over the baselines on various node classification benchmark datasets.
arXiv Detail & Related papers (2022-06-26T16:29:36Z)
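A hedged approximation of the idea: add a per-node, per-class handicap to the logits based on neighborhood class frequencies, so a class that already dominates a node's neighborhood must win by a larger margin. The `nbr_class_freq` input and the single `beta` scale are illustrative; the paper's actual margin design is more involved.

```python
import torch
import torch.nn.functional as F

def topology_aware_margin_loss(logits, labels, nbr_class_freq, beta=0.5):
    """Cross-entropy with a margin added to non-target logits, scaled by
    how often each class occurs among a node's neighbors. A rough sketch
    of the idea, not TAM's exact margin terms.
    logits         : (N, C) raw class scores
    labels         : (N,)   target classes
    nbr_class_freq : (N, C) neighbor class frequencies per node"""
    target_onehot = F.one_hot(labels, logits.size(1)).float()
    margin = beta * nbr_class_freq * (1.0 - target_onehot)
    return F.cross_entropy(logits + margin, labels)
```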
- Label-Enhanced Graph Neural Network for Semi-supervised Node Classification [32.64730237473914]
We present a label-enhanced learning framework for Graph Neural Networks (GNNs).
It first models each label as a virtual center for intra-class nodes and then jointly learns the representations of both nodes and labels.
Our approach not only smooths the representations of nodes belonging to the same class, but also explicitly encodes the label semantics into the learning process of GNNs.
arXiv Detail & Related papers (2022-05-31T09:48:47Z)
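The "label as virtual center" idea can be sketched as one learnable embedding per class plus a term pulling labeled nodes toward their class center; the paper's full model also feeds label representations into message passing, which this minimal version omits.

```python
import torch
import torch.nn as nn

class LabelCenters(nn.Module):
    """Learnable embedding per class acting as a virtual center;
    labeled nodes are pulled toward their class center. A minimal
    sketch of the idea, not the paper's full model."""
    def __init__(self, num_classes, dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, dim))

    def forward(self, z, labels, mask):
        # squared distance from each labeled node to its class center
        diff = z[mask] - self.centers[labels[mask]]
        return diff.pow(2).sum(-1).mean()
```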
- Simplifying Node Classification on Heterophilous Graphs with Compatible Label Propagation [6.071760028190454]
Recent work has shown that a well-known graph algorithm, Label Propagation (LP), combined with a shallow neural network, can achieve performance comparable to GNNs in semi-supervised node classification on graphs with high homophily.
In this paper, we show that this approach falls short on graphs with low homophily, where nodes often connect to nodes of other classes.
Our algorithm first learns the class compatibility matrix and then aggregates label predictions using the LP algorithm, weighted by class compatibilities.
arXiv Detail & Related papers (2022-05-19T08:34:34Z)
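A minimal sketch of compatibility-weighted propagation: rather than a neighbor voting for its own class, its soft label is mapped through a class-compatibility matrix H, which keeps propagation meaningful on low-homophily graphs. Here H is assumed given and row-normalized; in the paper it is learned first.

```python
import torch

def compatibility_lp(A_hat, y_soft, H, num_iters=10, alpha=0.5):
    """Label propagation through a class-compatibility matrix, so a
    neighbor of class c votes for the classes c is compatible with.
    A_hat : (N, N) row-normalized adjacency
    y_soft: (N, C) initial soft labels
    H     : (C, C) row-normalized class-compatibility matrix (assumed given)"""
    y = y_soft.clone()
    for _ in range(num_iters):
        y = alpha * y_soft + (1 - alpha) * (A_hat @ y @ H)
        y = y / y.sum(dim=-1, keepdim=True).clamp(min=1e-12)  # renormalize rows
    return y
```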
- Informative Pseudo-Labeling for Graph Neural Networks with Few Labels [12.83841767562179]
Graph Neural Networks (GNNs) have achieved state-of-the-art results for semi-supervised node classification on graphs.
However, how to effectively learn GNNs with very few labels remains under-explored.
We propose a novel informative pseudo-labeling framework, called InfoGNN, to facilitate learning of GNNs with extremely few labels.
arXiv Detail & Related papers (2022-01-20T01:49:30Z)
- On the Equivalence of Decoupled Graph Convolution Network and Label Propagation [60.34028546202372]
Recent work shows that coupling feature transformation with propagation is inferior to decoupling them, which better supports deep graph propagation.
Despite effectiveness, the working mechanisms of the decoupled GCN are not well understood.
We propose a new label propagation method named Propagation then Training Adaptively (PTA), which overcomes the flaws of the decoupled GCN.
arXiv Detail & Related papers (2020-10-23T13:57:39Z)
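For context, a decoupled design separates the trainable transformation from parameter-free propagation, as in the generic sketch below; PTA's adaptive reweighting of propagated pseudo-labels during training is not reproduced here.

```python
import torch
import torch.nn as nn

class DecoupledGCN(nn.Module):
    """Decoupled design: a plain MLP predicts per-node class scores, then
    a fixed K-step propagation smooths them over the graph. A generic
    sketch of decoupling, not PTA itself."""
    def __init__(self, in_dim, hid_dim, num_classes, K=10):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, num_classes))
        self.K = K

    def forward(self, x, A_hat):
        y = self.mlp(x)              # trainable transformation
        for _ in range(self.K):      # parameter-free propagation
            y = A_hat @ y
        return y
```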
- Unifying Graph Convolutional Neural Networks and Label Propagation [73.82013612939507]
We study the relationship between LPA and GCN in terms of two aspects: feature/label smoothing and feature/label influence.
Based on our theoretical analysis, we propose an end-to-end model that unifies GCN and LPA for node classification.
Our model can also be seen as learning attention weights based on node labels, which is more task-oriented than existing feature-based attention models.
arXiv Detail & Related papers (2020-02-17T03:23:13Z)
- Graph Inference Learning for Semi-supervised Classification [50.55765399527556]
We propose a Graph Inference Learning framework to boost the performance of semi-supervised node classification.
For learning the inference process, we introduce meta-optimization on structure relations from training nodes to validation nodes.
Comprehensive evaluations on four benchmark datasets demonstrate the superiority of our proposed GIL when compared against state-of-the-art methods.
arXiv Detail & Related papers (2020-01-17T02:52:30Z)