Let Invariant Rationale Discovery Inspire Graph Contrastive Learning
- URL: http://arxiv.org/abs/2206.07869v1
- Date: Thu, 16 Jun 2022 01:28:40 GMT
- Title: Let Invariant Rationale Discovery Inspire Graph Contrastive Learning
- Authors: Sihang Li, Xiang Wang, An Zhang, Yingxin Wu, Xiangnan He and Tat-Seng Chua
- Abstract summary: We argue that a high-performing augmentation should preserve the salient semantics of anchor graphs regarding instance-discrimination.
We propose a new framework, Rationale-aware Graph Contrastive Learning (RGCL).
RGCL uses a rationale generator to reveal salient features about graph instance-discrimination as the rationale, and then creates rationale-aware views for contrastive learning.
- Score: 98.10268114789775
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Leading graph contrastive learning (GCL) methods perform graph augmentations
in two fashions: (1) randomly corrupting the anchor graph, which could cause
the loss of semantic information, or (2) using domain knowledge to maintain
salient features, which undermines the generalization to other domains. Taking
an invariance look at GCL, we argue that a high-performing augmentation should
preserve the salient semantics of anchor graphs regarding
instance-discrimination. To this end, we relate GCL with invariant rationale
discovery, and propose a new framework, Rationale-aware Graph Contrastive
Learning (RGCL). Specifically, without supervision signals, RGCL uses a
rationale generator to reveal salient features about graph
instance-discrimination as the rationale, and then creates rationale-aware
views for contrastive learning. This rationale-aware pre-training scheme endows
the backbone model with the powerful representation ability, further
facilitating the fine-tuning on downstream tasks. On MNIST-Superpixel and MUTAG
datasets, visual inspections on the discovered rationales showcase that the
rationale generator successfully captures the salient features (i.e.
distinguishing semantic nodes in graphs). On biochemical molecule and social
network benchmark datasets, the state-of-the-art performance of RGCL
demonstrates the effectiveness of rationale-aware views for contrastive
learning. Our codes are available at https://github.com/lsh0520/RGCL.
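The rationale-aware view construction described above can be sketched in a few lines. This is a minimal, hypothetical illustration only: it assumes a per-node saliency score has already been produced by some rationale generator, and simply splits the graph into a rationale view (top-scoring nodes) and its complement. It is not the authors' implementation.

```python
def rationale_views(nodes, edges, scores, keep_ratio=0.7):
    """Split a graph into a rationale view (salient nodes) and its
    complement, given per-node saliency scores.

    Hypothetical sketch of rationale-aware augmentation; `scores` is
    assumed to come from a learned rationale generator.
    """
    k = max(1, int(len(nodes) * keep_ratio))
    ranked = sorted(nodes, key=lambda n: scores[n], reverse=True)
    rationale = set(ranked[:k])    # salient nodes form the rationale view
    complement = set(ranked[k:])   # remaining nodes form the complement view
    # keep only edges whose endpoints both survive in a given view
    r_edges = [(u, v) for u, v in edges if u in rationale and v in rationale]
    c_edges = [(u, v) for u, v in edges if u in complement and v in complement]
    return (rationale, r_edges), (complement, c_edges)
```

In a contrastive setup, the rationale view would serve as a semantics-preserving positive for the anchor graph, while the complement can act as a hard negative.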
Related papers
- Disentangled Generative Graph Representation Learning [51.59824683232925]
This paper introduces DiGGR (Disentangled Generative Graph Representation Learning), a self-supervised learning framework.
It aims to learn latent disentangled factors and utilize them to guide graph mask modeling.
Experiments on 11 public datasets for two different graph learning tasks demonstrate that DiGGR consistently outperforms many previous self-supervised methods.
arXiv Detail & Related papers (2024-08-24T05:13:02Z)
- Deep Contrastive Graph Learning with Clustering-Oriented Guidance [61.103996105756394]
Graph Convolutional Network (GCN) has exhibited remarkable potential in improving graph-based clustering.
Existing models typically estimate an initial graph beforehand in order to apply GCN.
A Deep Contrastive Graph Learning (DCGL) model is proposed for general data clustering.
arXiv Detail & Related papers (2024-02-25T07:03:37Z)
- Graph Contrastive Learning with Cohesive Subgraph Awareness [34.76555185419192]
Graph contrastive learning (GCL) has emerged as a state-of-the-art strategy for learning representations of diverse graphs.
We argue that an awareness of subgraphs during the graph augmentation and learning processes has the potential to enhance GCL performance.
We propose a novel unified framework called CTAug to seamlessly integrate cohesion awareness into various existing GCL mechanisms.
arXiv Detail & Related papers (2024-01-31T03:51:30Z)
- A Graph is Worth 1-bit Spikes: When Graph Contrastive Learning Meets Spiking Neural Networks [35.35462459134551]
SpikeGCL is a novel framework to learn binarized 1-bit representations for graphs.
We provide theoretical guarantees to demonstrate that SpikeGCL has expressiveness comparable to its full-precision counterparts.
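The idea of 1-bit graph representations can be illustrated with a simple sign-threshold binarization. This is a hypothetical stand-in, not SpikeGCL's actual spiking mechanism, and the threshold choice is an assumption:

```python
def binarize(embedding, threshold=0.0):
    """Quantize a real-valued node embedding to 1 bit per dimension by
    thresholding. Illustrative stand-in for learning binarized graph
    representations; the threshold is an assumption, not the paper's."""
    return [1 if x > threshold else 0 for x in embedding]

def hamming_similarity(a, b):
    """Similarity between two 1-bit codes: fraction of matching bits.
    Binary codes make similarity search cheap compared with
    full-precision dot products."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)
```

Binary codes of this kind trade a small amount of representational precision for large savings in storage and similarity computation, which is the practical motivation for 1-bit graph embeddings.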
arXiv Detail & Related papers (2023-05-30T16:03:11Z)
- Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z)
- Unifying Graph Contrastive Learning with Flexible Contextual Scopes [57.86762576319638]
We present a self-supervised learning method termed Unifying Graph Contrastive Learning with Flexible Contextual Scopes (UGCL for short).
Our algorithm builds flexible contextual representations with contextual scopes by controlling the power of an adjacency matrix.
Based on representations from both local and contextual scopes, UGCL optimises a very simple contrastive loss function for graph representation learning.
arXiv Detail & Related papers (2022-10-17T07:16:17Z)
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes GraphCoCo, an effective graph complementary contrastive learning approach.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- Augmentation-Free Self-Supervised Learning on Graphs [7.146027549101716]
We propose a novel augmentation-free self-supervised learning framework for graphs, named AFGRL.
Specifically, we generate an alternative view of a graph by discovering nodes that share local structural information and global semantics with it.
arXiv Detail & Related papers (2021-12-05T04:20:44Z)
- Self-supervised Consensus Representation Learning for Attributed Graph [15.729417511103602]
We introduce a self-supervised learning mechanism into graph representation learning.
We propose a novel Self-supervised Consensus Representation Learning (SCRL) framework.
The proposed SCRL method treats the graph from two perspectives: a topology graph and a feature graph.
arXiv Detail & Related papers (2021-08-10T07:53:09Z)
- Multi-Level Graph Contrastive Learning [38.022118893733804]
We propose a Multi-Level Graph Contrastive Learning (MLGCL) framework for learning robust representation of graph data by contrasting space views of graphs.
The original graph is a first-order approximation structure and contains uncertainty or error, while the $k$NN graph generated from encoded features preserves high-order proximity.
Extensive experiments indicate MLGCL achieves promising results compared with the existing state-of-the-art graph representation learning methods on seven datasets.
arXiv Detail & Related papers (2021-07-06T14:24:43Z)
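The $k$NN feature-space view used in multi-level contrastive schemes like the one above can be sketched as follows. This is a minimal illustration assuming plain feature vectors and cosine similarity; the similarity measure and the choice of $k$ are assumptions, not MLGCL's exact setup:

```python
import math

def knn_graph(features, k=2):
    """Build a k-nearest-neighbour graph from node feature vectors using
    cosine similarity. Sketch of the feature-space view that is
    contrasted with the original topology graph."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    edges = set()
    for i, fi in enumerate(features):
        # rank all other nodes by similarity and link to the top k
        sims = [(cos(fi, fj), j) for j, fj in enumerate(features) if j != i]
        for _, j in sorted(sims, reverse=True)[:k]:
            edges.add((min(i, j), max(i, j)))  # store as undirected edge
    return sorted(edges)
```

Contrasting the encoder's outputs on this feature-derived graph against its outputs on the original topology is what lets such methods exploit both structural views.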
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.