LAC: Graph Contrastive Learning with Learnable Augmentation in Continuous Space
- URL: http://arxiv.org/abs/2410.15355v1
- Date: Sun, 20 Oct 2024 10:47:15 GMT
- Title: LAC: Graph Contrastive Learning with Learnable Augmentation in Continuous Space
- Authors: Zhenyu Lin, Hongzheng Li, Yingxia Shao, Guanhua Ye, Yawen Li, Quanqing Xu,
- Abstract summary: We introduce LAC, a graph contrastive learning framework with learnable data augmentation in an orthogonal continuous space.
To capture the representative information in the graph data during augmentation, we introduce a continuous view augmenter.
We propose an information-theoretic principle named InfoBal and introduce corresponding pretext tasks.
Our experimental results show that LAC significantly outperforms the state-of-the-art frameworks.
- Score: 16.26882307454389
- Abstract: Graph Contrastive Learning frameworks have demonstrated success in generating high-quality node representations. Existing research on efficient data augmentation methods and ideal pretext tasks for graph contrastive learning remains limited, resulting in suboptimal node representations in the unsupervised setting. In this paper, we introduce LAC, a graph contrastive learning framework with learnable data augmentation in an orthogonal continuous space. To capture the representative information in the graph data during augmentation, we introduce a continuous view augmenter that applies a masked topology augmentation module and a cross-channel feature augmentation module to adaptively augment the topological and feature information, respectively, within an orthogonal continuous space. The orthogonal nature of this space ensures that the augmentation process avoids dimension collapse. To enhance the effectiveness of pretext tasks, we propose an information-theoretic principle named InfoBal and introduce corresponding pretext tasks. These tasks enable the continuous view augmenter to maintain consistency in the representative information across views while maximizing diversity between views, and allow the encoder to fully utilize the representative information in the unsupervised setting. Our experimental results show that LAC significantly outperforms state-of-the-art frameworks.
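The core idea of augmentation in a continuous space, as opposed to discrete edge dropping, can be illustrated with a minimal sketch. This is not LAC's actual module; the function names and the use of a simple sigmoid mask are illustrative assumptions, showing only why a continuous mask keeps the augmentation differentiable (and hence learnable):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def continuous_topology_augment(adj, mask_logits):
    """Soften a binary adjacency matrix with a continuous, learnable mask.

    Instead of discretely dropping edges, each edge weight is scaled by a
    value in (0, 1), so gradients can flow back into `mask_logits` and the
    augmentation itself can be optimized.
    """
    mask = sigmoid(mask_logits)   # continuous values in (0, 1)
    return adj * mask             # element-wise soft edge weighting

# Toy 3-node path graph: edges (0,1) and (1,2).
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
logits = np.zeros_like(adj)       # sigmoid(0) = 0.5 for every edge
aug = continuous_topology_augment(adj, logits)
```

Because the mask never hard-zeros an edge, an optimizer can adjust `mask_logits` end-to-end under a contrastive objective, which is the property a learnable augmenter needs.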
Related papers
- Enhancing Graph Contrastive Learning with Reliable and Informative Augmentation for Recommendation [84.45144851024257]
CoGCL aims to enhance graph contrastive learning by constructing contrastive views with stronger collaborative information via discrete codes.
We introduce a multi-level vector quantizer in an end-to-end manner to quantize user and item representations into discrete codes.
For neighborhood structure, we propose virtual neighbor augmentation by treating discrete codes as virtual neighbors.
Regarding semantic relevance, we identify similar users/items based on shared discrete codes and interaction targets to generate the semantically relevant view.
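The discrete codes above come from vector quantization. As a hedged sketch (the function name and toy codebook are assumptions, not CoGCL's implementation), the forward pass of a quantizer simply snaps each continuous representation to its nearest codebook entry:

```python
import numpy as np

def quantize(vectors, codebook):
    """Assign each representation to its nearest codebook entry (L2 distance).

    Returns the discrete code indices and the quantized vectors, mimicking
    the forward pass of a vector quantizer.
    """
    # dists[i, j] = ||vectors[i] - codebook[j]||^2 via broadcasting
    dists = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    codes = dists.argmin(axis=1)
    return codes, codebook[codes]

codebook = np.array([[0., 0.], [1., 1.]])   # two toy code vectors
users = np.array([[0.1, -0.1], [0.9, 1.2]]) # two toy user embeddings
codes, quantized = quantize(users, codebook)
```

Users or items that land on the same code index can then be treated as sharing collaborative semantics, which is what makes the codes usable as "virtual neighbors".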
arXiv Detail & Related papers (2024-09-09T14:04:17Z)
- Spatial-Temporal Graph Learning with Adversarial Contrastive Adaptation [19.419836274690816]
We propose a new spatial-temporal graph learning model (GraphST) for enabling effective self-supervised learning.
Our proposed model is an adversarial contrastive learning paradigm that automates the distillation of crucial multi-view self-supervised information.
We demonstrate the superiority of our proposed GraphST method in various spatial-temporal prediction tasks on real-life datasets.
arXiv Detail & Related papers (2023-06-19T03:09:35Z)
- Joint Data and Feature Augmentation for Self-Supervised Representation Learning on Point Clouds [4.723757543677507]
We propose a fusion contrastive learning framework to combine data augmentations in Euclidean space and feature augmentations in feature space.
We conduct extensive object classification experiments and object part segmentation experiments to validate the transferability of the proposed framework.
Experimental results demonstrate that the proposed framework is effective in learning point cloud representations in a self-supervised manner.
arXiv Detail & Related papers (2022-11-02T14:58:03Z)
- Adversarial Cross-View Disentangled Graph Contrastive Learning [30.97720522293301]
We introduce ACDGCL, which follows the information bottleneck principle to learn minimal yet sufficient representations from graph data.
We empirically demonstrate that our proposed model outperforms state-of-the-art methods on the graph classification task across multiple benchmark datasets.
arXiv Detail & Related papers (2022-09-16T03:48:39Z)
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- Heterogeneous Contrastive Learning: Encoding Spatial Information for Compact Visual Representations [183.03278932562438]
This paper presents an effective approach that adds spatial information to the encoding stage to alleviate the learning inconsistency between the contrastive objective and strong data augmentation operations.
We show that our approach learns visual representations more efficiently, offering guidance for future research on self-supervised visual representation learning.
arXiv Detail & Related papers (2020-11-19T16:26:25Z)
- Hyperbolic Graph Embedding with Enhanced Semi-Implicit Variational Inference [48.63194907060615]
We build off of semi-implicit graph variational auto-encoders to capture higher-order statistics in a low-dimensional graph latent representation.
We incorporate hyperbolic geometry in the latent space through a Poincare embedding to efficiently represent graphs exhibiting hierarchical structure.
arXiv Detail & Related papers (2020-10-31T05:48:34Z)
- Graph Contrastive Learning with Adaptive Augmentation [23.37786673825192]
We propose a novel graph contrastive representation learning method with adaptive augmentation.
Specifically, we design augmentation schemes based on node centrality measures to highlight important connective structures.
Our proposed method consistently outperforms existing state-of-the-art baselines and even surpasses some supervised counterparts.
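A minimal sketch of centrality-based adaptive augmentation follows. The function name, the choice of degree as the centrality measure, and the normalization scheme are all illustrative assumptions; the paper's point is only that edges touching important nodes should be dropped less often:

```python
import numpy as np

def edge_drop_probs(adj, p_max=0.7):
    """Assign each edge a drop probability inversely related to the
    centrality (here: degree) of its endpoints, so that edges touching
    important hub nodes are more likely to survive augmentation.
    """
    deg = adj.sum(axis=1)
    # Edge centrality: mean degree of the two endpoints (broadcast sum).
    edge_cent = (deg[:, None] + deg[None, :]) / 2.0
    # Keep centrality only where an edge exists, normalize to [0, 1],
    # then invert: high centrality -> low drop probability (capped at p_max).
    cent = np.where(adj > 0, edge_cent, 0.0)
    norm = cent / cent.max()
    return np.where(adj > 0, p_max * (1.0 - norm), 0.0)

adj = np.array([[0., 1., 1., 1.],
                [1., 0., 0., 0.],
                [1., 0., 0., 1.],
                [1., 0., 1., 0.]])  # node 0 is a hub
probs = edge_drop_probs(adj)
```

Sampling a binary mask from these probabilities yields one augmented view per draw, with the connective backbone of the graph largely preserved.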
arXiv Detail & Related papers (2020-10-27T15:12:21Z)
- Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data, which universally works in node classification, link prediction, and graph classification tasks.
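The inner loop described here can be sketched as follows. This is a simplified stand-in, not FLAG's released code: the toy quadratic loss and its analytic gradient are assumptions used so the sketch runs without a neural network, but the ascend-and-project structure matches the description above:

```python
import numpy as np

def flag_perturb(x, grad_fn, steps=3, step_size=0.01, eps=0.05):
    """FLAG-style inner loop (sketch): iteratively grow an adversarial
    perturbation `delta` on node features via gradient *ascent* on the
    training loss, keeping it inside an L-infinity ball of radius `eps`.
    """
    delta = np.random.uniform(-eps, eps, size=x.shape)
    for _ in range(steps):
        g = grad_fn(x + delta)                  # dLoss/d(input features)
        delta = delta + step_size * np.sign(g)  # ascent step
        delta = np.clip(delta, -eps, eps)       # project back into the ball
    return x + delta

# Toy loss L = 0.5 * ||z||^2, whose gradient w.r.t. the input is simply z.
grad_fn = lambda z: z
x = np.ones((4, 2))                             # 4 nodes, 2 features each
x_aug = flag_perturb(x, grad_fn)
```

In the real method the encoder is trained on `x_aug` each step, so the model sees worst-case feature noise regardless of the downstream task, which is what makes the approach task-agnostic.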
arXiv Detail & Related papers (2020-10-19T21:51:47Z)
- Graph Representation Learning via Graphical Mutual Information Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
arXiv Detail & Related papers (2020-02-04T08:33:49Z)
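Mutual-information maximization of this kind is commonly approximated with a contrastive lower bound such as InfoNCE. The sketch below is a generic InfoNCE loss, not GMI's estimator specifically; the temperature value and the toy orthogonal embeddings are assumptions for illustration:

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """InfoNCE loss between two views: a standard variational lower bound
    on the mutual information between paired representations. Row i of z1
    and row i of z2 form a positive pair; all other rows act as negatives.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                       # scaled cosine similarities
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    log_probs = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))         # -log softmax of positives

z = np.eye(4)                        # four orthogonal toy embeddings
aligned = info_nce(z, z)             # correctly paired views: low loss
mismatched = info_nce(z, z[::-1])    # wrong pairing: higher loss
```

Minimizing this loss pushes paired representations together relative to negatives, which tightens the mutual-information bound between the encoder's input and output.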
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.