Label Contrastive Coding based Graph Neural Network for Graph
Classification
- URL: http://arxiv.org/abs/2101.05486v1
- Date: Thu, 14 Jan 2021 07:45:55 GMT
- Title: Label Contrastive Coding based Graph Neural Network for Graph
Classification
- Authors: Yuxiang Ren, Jiyang Bai, and Jiawei Zhang
- Abstract summary: We propose the novel Label Contrastive Coding based Graph Neural Network (LCGNN) to utilize label information more effectively and comprehensively.
To power the contrastive learning, LCGNN introduces a dynamic label memory bank and a momentum updated encoder.
Our evaluations with eight benchmark graph datasets demonstrate that LCGNN can outperform state-of-the-art graph classification models.
- Score: 9.80278570179994
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph classification is a critical research problem in many applications from
different domains. In order to learn a graph classification model, the most
widely used supervision component is an output layer together with
classification loss (e.g., cross-entropy loss together with softmax, or margin
loss). In fact, the discriminative information among instances is more
fine-grained, which can benefit graph classification tasks. In this paper, we
propose the novel Label Contrastive Coding based Graph Neural Network (LCGNN)
to utilize label information more effectively and comprehensively. LCGNN still
uses the classification loss to ensure the discriminability of classes.
Meanwhile, LCGNN leverages the proposed Label Contrastive Loss derived from
self-supervised learning to encourage instance-level intra-class compactness
and inter-class separability. To power the contrastive learning, LCGNN
introduces a dynamic label memory bank and a momentum updated encoder. Our
extensive evaluations with eight benchmark graph datasets demonstrate that
LCGNN can outperform state-of-the-art graph classification models. Experimental
results also verify that LCGNN can achieve competitive performance with less
training data because LCGNN exploits label information comprehensively.
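The core mechanism the abstract describes, a contrastive loss whose positives are same-label instances drawn from a memory bank, with keys produced by a momentum-updated encoder, can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the function names, the plain dot-product similarity, and the temperature value are assumptions.

```python
import numpy as np

def label_contrastive_loss(query, bank_keys, bank_labels, query_label, tau=0.07):
    """Illustrative label contrastive loss over a memory bank.

    Every bank entry sharing the query's label is treated as a positive;
    all other entries act as negatives.
    """
    # similarity of the query to every key, scaled by temperature
    logits = bank_keys @ query / tau
    # numerically stable softmax weights
    logits = logits - logits.max()
    weights = np.exp(logits)
    pos = weights[bank_labels == query_label].sum()
    return -np.log(pos / weights.sum())

def momentum_update(theta_key, theta_query, m=0.999):
    """Momentum update of the key encoder's parameters, MoCo-style:
    the key encoder drifts slowly toward the query encoder."""
    return m * theta_key + (1 - m) * theta_query
```

The loss shrinks as same-label keys in the bank become more similar to the query and grows as different-label keys do, which is exactly the instance-level intra-class compactness and inter-class separability the abstract refers to.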
Related papers
- Deep Contrastive Graph Learning with Clustering-Oriented Guidance [61.103996105756394]
Graph Convolutional Network (GCN) has exhibited remarkable potential in improving graph-based clustering.
Existing models must estimate an initial graph beforehand in order to apply GCN.
Deep Contrastive Graph Learning (DCGL) model is proposed for general data clustering.
arXiv Detail & Related papers (2024-02-25T07:03:37Z)
- Class-Balanced and Reinforced Active Learning on Graphs [13.239043161351482]
Graph neural networks (GNNs) have demonstrated significant success in various applications, such as node classification, link prediction, and graph classification.
Active learning for GNNs aims to query the valuable samples from the unlabeled data for annotation to maximize the GNNs' performance at a lower cost.
Most existing reinforced active learning algorithms for GNNs may lead to a highly imbalanced class distribution, especially in skewed class scenarios.
We propose GCBR, a novel class-balanced and reinforced active learning framework for GNNs, which learns an optimal policy to acquire class-balanced and informative nodes.
arXiv Detail & Related papers (2024-02-15T16:37:14Z)
- GNNEvaluator: Evaluating GNN Performance On Unseen Graphs Without Labels [81.93520935479984]
We study a new problem, GNN model evaluation, that aims to assess the performance of a specific GNN model trained on labeled and observed graphs.
We propose a two-stage GNN model evaluation framework, including (1) DiscGraph set construction and (2) GNNEvaluator training and inference.
Under the effective training supervision from the DiscGraph set, GNNEvaluator learns to precisely estimate node classification accuracy of the to-be-evaluated GNN model.
arXiv Detail & Related papers (2023-10-23T05:51:59Z)
- A Class-Aware Representation Refinement Framework for Graph Classification [8.998543739618077]
We propose a Class-Aware Representation rEfinement (CARE) framework for the task of graph classification.
CARE computes simple yet powerful class representations and injects them to steer the learning of graph representations towards better class separability.
Our experiments with 11 well-known GNN backbones on 9 benchmark datasets validate the superiority and effectiveness of CARE over its GNN counterparts.
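The idea of computing simple class representations and using them to steer graph embeddings toward better separability can be sketched along these lines. The function name, the mean-embedding class representation, and the interpolation step are illustrative assumptions, not CARE's actual interface.

```python
import numpy as np

def refine_with_class_centroids(embeddings, labels, alpha=0.5):
    """Illustrative class-aware refinement: use the per-class mean embedding
    as a simple class representation, then nudge each graph embedding toward
    its own class centroid to tighten intra-class structure."""
    refined = embeddings.astype(float).copy()
    for c in np.unique(labels):
        mask = labels == c
        centroid = embeddings[mask].mean(axis=0)
        # interpolate between the original embedding and its class centroid
        refined[mask] = (1 - alpha) * embeddings[mask] + alpha * centroid
    return refined
```

After refinement, embeddings of the same class sit closer to their shared centroid, which is the kind of class-separability signal a downstream classifier benefits from.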
arXiv Detail & Related papers (2022-09-02T10:18:33Z)
- Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
- Adversarial Graph Augmentation to Improve Graph Contrastive Learning [21.54343383921459]
We propose a novel principle, termed adversarial-GCL (AD-GCL), which enables GNNs to avoid capturing redundant information during the training.
We experimentally validate AD-GCL by comparing with the state-of-the-art GCL methods and achieve performance gains of up to 14% in unsupervised, 6% in transfer, and 3% in semi-supervised learning settings.
arXiv Detail & Related papers (2021-06-10T15:34:26Z)
- Unified Robust Training for Graph Neural Networks against Label Noise [12.014301020294154]
We propose a new framework, UnionNET, for learning with noisy labels on graphs under a semi-supervised setting.
Our approach provides a unified solution for robustly training GNNs and performing label correction simultaneously.
arXiv Detail & Related papers (2021-03-05T01:17:04Z)
- Delving Deep into Label Smoothing [112.24527926373084]
Label smoothing is an effective regularization tool for deep neural networks (DNNs).
We present an Online Label Smoothing (OLS) strategy, which generates soft labels based on the statistics of the model prediction for the target category.
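Generating soft labels from the statistics of the model's own predictions for each target class can be sketched as below. This is a minimal sketch in the spirit of OLS, with a hypothetical interface; the paper's actual bookkeeping (e.g., per-epoch resets, filtering to correct predictions) may differ.

```python
import numpy as np

class OnlineLabelSmoother:
    """Accumulate the average predicted distribution for each ground-truth
    class and serve that running mean as the soft label for that class."""

    def __init__(self, num_classes):
        self.num_classes = num_classes
        self.sums = np.zeros((num_classes, num_classes))
        self.counts = np.zeros(num_classes)

    def update(self, probs, target):
        # probs: the model's predicted distribution for one sample
        # whose ground-truth class is `target`
        self.sums[target] += probs
        self.counts[target] += 1

    def soft_label(self, target):
        if self.counts[target] == 0:
            # before any statistics exist, fall back to a one-hot label
            onehot = np.zeros(self.num_classes)
            onehot[target] = 1.0
            return onehot
        return self.sums[target] / self.counts[target]
```

Because each soft label is an average of valid probability distributions, it stays a valid distribution itself, unlike a hand-tuned uniform-smoothing constant it adapts to which wrong classes the model actually confuses with the target.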
arXiv Detail & Related papers (2020-11-25T08:03:11Z)
- Knowledge-Guided Multi-Label Few-Shot Learning for General Image Recognition [75.44233392355711]
The KGGR framework exploits prior knowledge of statistical label correlations with deep neural networks.
It first builds a structured knowledge graph to correlate different labels based on statistical label co-occurrence.
Then, it introduces the label semantics to guide learning semantic-specific features.
It exploits a graph propagation network to explore graph node interactions.
arXiv Detail & Related papers (2020-09-20T15:05:29Z)
- Contrastive and Generative Graph Convolutional Networks for Graph-based Semi-Supervised Learning [64.98816284854067]
Graph-based Semi-Supervised Learning (SSL) aims to transfer the labels of a handful of labeled data to the remaining massive unlabeled data via a graph.
A novel GCN-based SSL algorithm is presented in this paper to enrich the supervision signals by utilizing both data similarities and graph structure.
arXiv Detail & Related papers (2020-09-15T13:59:28Z)
- Few-Shot Learning on Graphs via Super-Classes based on Graph Spectral Measures [14.932318540666545]
We study the problem of few-shot graph classification in graph neural networks (GNNs) to recognize unseen classes, given limited labeled graph examples.
We propose an approach where a probability measure is assigned to each graph based on the spectrum of the graph's normalized Laplacian.
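One simple way to turn a graph into such a probability measure is to normalize the eigenvalues of its symmetric normalized Laplacian into a distribution. The sketch below is an assumption about the construction, the paper's exact measure may differ; it shows only the spectrum-to-distribution step.

```python
import numpy as np

def spectral_measure(adj):
    """Illustrative sketch: compute the symmetric normalized Laplacian
    L = I - D^{-1/2} A D^{-1/2} of an undirected graph, take its spectrum
    (which lies in [0, 2]), and normalize it into a probability measure."""
    adj = np.asarray(adj, dtype=float)
    deg = adj.sum(axis=1)
    # D^{-1/2}, leaving isolated vertices at zero to avoid division by zero
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    eigvals = np.linalg.eigvalsh(lap)
    eigvals = np.clip(eigvals, 0.0, None)  # guard tiny negative round-off
    return eigvals / eigvals.sum()
```

Since the spectrum is permutation-invariant, two isomorphic graphs receive the same measure, which is what makes a spectrum-based measure a reasonable graph signature.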
arXiv Detail & Related papers (2020-02-27T17:11:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.