Overcoming Pitfalls in Graph Contrastive Learning Evaluation: Toward
Comprehensive Benchmarks
- URL: http://arxiv.org/abs/2402.15680v1
- Date: Sat, 24 Feb 2024 01:47:56 GMT
- Title: Overcoming Pitfalls in Graph Contrastive Learning Evaluation: Toward
Comprehensive Benchmarks
- Authors: Qian Ma, Hongliang Chi, Hengrui Zhang, Kay Liu, Zhiwei Zhang, Lu
Cheng, Suhang Wang, Philip S. Yu, Yao Ma
- Abstract summary: We introduce an enhanced evaluation framework designed to more accurately gauge the effectiveness, consistency, and overall capability of Graph Contrastive Learning (GCL) methods.
- Score: 60.82579717007963
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rise of self-supervised learning, which operates without the need for
labeled data, has garnered significant interest within the graph learning
community. This enthusiasm has led to the development of numerous Graph
Contrastive Learning (GCL) techniques, all aiming to create a versatile graph
encoder that leverages the wealth of unlabeled data for various downstream
tasks. However, the current evaluation standards for GCL approaches are flawed
due to the need for extensive hyper-parameter tuning during pre-training and
the reliance on a single downstream task for assessment. These flaws can skew
the evaluation away from the intended goals, potentially leading to misleading
conclusions. In our paper, we thoroughly examine these shortcomings and offer
fresh perspectives on how GCL methods are affected by hyper-parameter choices
and the choice of downstream tasks for their evaluation. Additionally, we
introduce an enhanced evaluation framework designed to more accurately gauge
the effectiveness, consistency, and overall capability of GCL methods.
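The evaluation protocol the abstract critiques is the common linear-probe setup: a graph encoder is pre-trained without labels, frozen, and then judged by a single downstream classifier fit on its embeddings. The following is a minimal sketch of that protocol using synthetic embeddings in place of a real GCL encoder; all data and dimensions are illustrative assumptions, not the paper's setup.

```python
# Sketch of the standard linear-probe evaluation protocol: a frozen
# encoder's embeddings are judged by a single downstream classifier.
# The embeddings below are synthetic stand-ins for GCL encoder output.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these came from a frozen, self-supervised encoder:
# 200 nodes, 32-dim embeddings, 2 latent classes with separated means.
labels = rng.integers(0, 2, size=200)
embeddings = rng.normal(size=(200, 32)) + 3.0 * labels[:, None]

# Train/test split; only the probe below ever sees the labels.
idx = rng.permutation(200)
tr, te = idx[:100], idx[100:]

# A simple linear probe (nearest class centroid in embedding space).
c0 = embeddings[tr][labels[tr] == 0].mean(axis=0)
c1 = embeddings[tr][labels[tr] == 1].mean(axis=0)
pred = (np.linalg.norm(embeddings[te] - c1, axis=1)
        < np.linalg.norm(embeddings[te] - c0, axis=1)).astype(int)
acc = (pred == labels[te]).mean()
print(f"linear-probe accuracy: {acc:.2f}")
```

Because the encoder is frozen, the score depends entirely on this one probe and one task, which is exactly the single-downstream-task reliance the paper identifies as a pitfall.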
Related papers
- Towards Graph Contrastive Learning: A Survey and Beyond [23.109430624817637]
Self-supervised learning (SSL) on graphs has gained increasing attention and has made significant progress.
SSL enables machine learning models to produce informative representations from unlabeled graph data.
Graph Contrastive Learning (GCL), however, has not yet been thoroughly surveyed in the existing literature.

arXiv Detail & Related papers (2024-05-20T08:19:10Z) - Adversarial Learning Data Augmentation for Graph Contrastive Learning in
Recommendation [56.10351068286499]
We propose Learnable Data Augmentation for Graph Contrastive Learning (LDA-GCL).
Our methods include data augmentation learning and graph contrastive learning, which follow the InfoMin and InfoMax principles, respectively.
In implementation, our methods optimize the adversarial loss function to learn data augmentation and effective representations of users and items.
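The InfoMax side of the principle pair mentioned above is typically instantiated with an InfoNCE-style objective that pulls two augmented views of the same node together and pushes different nodes apart. Below is a hedged numpy sketch of such a loss; the encoder outputs, temperature, and batch size are illustrative assumptions, not LDA-GCL's actual implementation.

```python
# Hedged sketch of an InfoNCE contrastive objective (the InfoMax
# principle). z1 and z2 are embeddings of two augmented views of the
# same batch of nodes; positives sit on the diagonal.
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """InfoNCE loss between two views z1, z2 of shape (n, d)."""
    # L2-normalize so dot products are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                         # (n, n) similarities
    sim = sim - sim.max(axis=1, keepdims=True)    # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # Positive pairs are on the diagonal (same node, two augmentations).
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
aligned = info_nce(z, z + 0.01 * rng.normal(size=(8, 16)))
shuffled = info_nce(z, rng.normal(size=(8, 16)))
print(aligned, shuffled)  # aligned views should yield the lower loss
```

The InfoMin side works in the opposite direction on the augmenter: it learns augmentations that minimize the mutual information shared between views while preserving task-relevant signal.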
arXiv Detail & Related papers (2023-02-05T06:55:51Z) - GraphLearner: Graph Node Clustering with Fully Learnable Augmentation [76.63963385662426]
Contrastive deep graph clustering (CDGC) leverages the power of contrastive learning to group nodes into different clusters.
We propose a Graph Node Clustering with Fully Learnable Augmentation, termed GraphLearner.
It introduces learnable augmentors to generate high-quality and task-specific augmented samples for CDGC.
arXiv Detail & Related papers (2022-12-07T10:19:39Z) - Features Based Adaptive Augmentation for Graph Contrastive Learning [0.0]
Self-Supervised learning aims to eliminate the need for expensive annotation in graph representation learning.
We introduce a Feature Based Adaptive Augmentation (FebAA) approach, which identifies and preserves potentially influential features.
We successfully improved the accuracy of GRACE and BGRL on eight graph representation learning benchmark datasets.
arXiv Detail & Related papers (2022-07-05T03:41:20Z) - GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z) - Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z) - Augmentations in Graph Contrastive Learning: Current Methodological
Flaws & Towards Better Practices [20.95255742208036]
Graph classification has applications in bioinformatics, social sciences, automated fake news detection, web document classification, and more.
Recently, contrastive learning (CL) has enabled unsupervised computer vision models to compete well against supervised ones, yet graph CL frameworks often rely on weaker augmentations and far less data than their visual counterparts.
Motivated by these discrepancies, we seek to determine: (i) why existing graph CL frameworks perform well despite weak augmentations and limited data; and (ii) whether adhering to visual CL principles can improve performance on graph classification tasks.
arXiv Detail & Related papers (2021-11-05T02:15:01Z) - An Empirical Study of Graph Contrastive Learning [17.246488437677616]
Graph Contrastive Learning establishes a new paradigm for learning graph representations without human annotations.
We identify several critical design considerations within a general GCL paradigm, including augmentation functions, contrasting modes, contrastive objectives, and negative mining techniques.
To foster future research and ease the implementation of GCL algorithms, we develop an easy-to-use library PyGCL, featuring modularized CL components, standardized evaluation, and experiment management.
arXiv Detail & Related papers (2021-09-02T17:43:45Z) - CogDL: A Comprehensive Library for Graph Deep Learning [55.694091294633054]
We present CogDL, a library for graph deep learning that allows researchers and practitioners to conduct experiments, compare methods, and build applications with ease and efficiency.
In CogDL, we propose a unified design for the training and evaluation of GNN models for various graph tasks, making it unique among existing graph learning libraries.
We develop efficient sparse operators for CogDL, making it one of the most efficient graph learning libraries available.
arXiv Detail & Related papers (2021-03-01T12:35:16Z) - Ask-n-Learn: Active Learning via Reliable Gradient Representations for
Image Classification [29.43017692274488]
Deep predictive models rely on human supervision in the form of labeled training data.
We propose Ask-n-Learn, an active learning approach based on gradient embeddings obtained using the pseudo-labels estimated at each iteration of the algorithm.
arXiv Detail & Related papers (2020-09-30T05:19:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.