Graph Contrastive Invariant Learning from the Causal Perspective
- URL: http://arxiv.org/abs/2401.12564v2
- Date: Thu, 7 Mar 2024 06:00:13 GMT
- Title: Graph Contrastive Invariant Learning from the Causal Perspective
- Authors: Yanhu Mo, Xiao Wang, Shaohua Fan, Chuan Shi
- Abstract summary: Graph contrastive learning (GCL), learning the node representation by contrasting two augmented graphs in a self-supervised way, has attracted considerable attention.
In this paper, we first study GCL from the perspective of causality.
By analyzing GCL with the structural causal model (SCM), we discover that traditional GCL may not well learn the invariant representations due to the non-causal information contained in the graph.
- Score: 41.62549508842019
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Graph contrastive learning (GCL), learning the node representation by
contrasting two augmented graphs in a self-supervised way, has attracted
considerable attention. GCL is usually believed to learn the invariant
representation. However, does this understanding always hold in practice? In
this paper, we first study GCL from the perspective of causality. By analyzing
GCL with the structural causal model (SCM), we discover that traditional GCL
may not well learn the invariant representations due to the non-causal
information contained in the graph. How can we fix it and encourage the current
GCL to learn better invariant representations? The SCM offers two requirements
and motivates us to propose a novel GCL method. In particular, we introduce the
spectral graph augmentation to simulate the intervention upon non-causal
factors. Then we design the invariance objective and independence objective to
better capture the causal factors. Specifically, (i) the invariance objective
encourages the encoder to capture the invariant information contained in causal
variables, and (ii) the independence objective aims to reduce the influence of
confounders on the causal variables. Experimental results demonstrate the
effectiveness of our approach on node classification tasks.
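As a rough illustration of the two objectives described above, the sketch below assumes an invariance term that aligns node embeddings across two spectrally augmented views and an independence term that decorrelates an (assumed) causal block of the embedding from a confounder block. The concrete loss forms, function names, and the embedding split are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def invariance_loss(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
    # Align representations of the same nodes under two augmented views,
    # encouraging invariance to the non-causal factors the augmentation perturbs.
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    return (2.0 - 2.0 * (z1 * z2).sum(dim=-1)).mean()

def independence_loss(z_causal: torch.Tensor, z_conf: torch.Tensor) -> torch.Tensor:
    # Penalize the cross-covariance between the causal block and the confounder
    # block so the confounder has little influence on the causal variables.
    zc = z_causal - z_causal.mean(dim=0)
    zs = z_conf - z_conf.mean(dim=0)
    cov = zc.t() @ zs / (zc.size(0) - 1)
    return (cov ** 2).mean()

# Hypothetical usage: z1, z2 are [N, d] embeddings from a shared GNN encoder
# applied to two spectrally augmented graphs; splitting each embedding into
# causal / confounder halves is an assumption made for illustration only.
# d = z1.size(1) // 2
# loss = invariance_loss(z1, z2) + lam * independence_loss(z1[:, :d], z1[:, d:])
```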
Related papers
- Rethinking and Simplifying Bootstrapped Graph Latents [48.76934123429186]
Graph contrastive learning (GCL) has emerged as a representative paradigm in graph self-supervised learning.
We present SGCL, a simple yet effective GCL framework that utilizes the outputs from two consecutive iterations as positive pairs.
We show that SGCL can achieve competitive performance with fewer parameters, lower time and space costs, and significant convergence speedup.
arXiv Detail & Related papers (2023-12-05T09:49:50Z) - Architecture Matters: Uncovering Implicit Mechanisms in Graph Contrastive Learning [34.566003077992384]
We present a systematic study of various graph contrastive learning (GCL) methods.
By uncovering how the implicit inductive bias of GNNs works in contrastive learning, we theoretically provide insights into the above intriguing properties of GCL.
Rather than directly porting existing NN methods to GCL, we advocate for more attention toward the unique architecture of graph learning.
arXiv Detail & Related papers (2023-11-05T15:54:17Z) - HomoGCL: Rethinking Homophily in Graph Contrastive Learning [64.85392028383164]
HomoGCL is a model-agnostic framework to expand the positive set using neighbor nodes with neighbor-specific significances.
We show that HomoGCL yields multiple state-of-the-art results across six public datasets.
arXiv Detail & Related papers (2023-06-16T04:06:52Z) - Uncovering the Structural Fairness in Graph Contrastive Learning [87.65091052291544]
Graph contrastive learning (GCL) has emerged as a promising self-supervised approach for learning node representations.
We show that representations obtained by GCL methods are already less affected by degree bias than those learned by GCN.
We devise a novel graph augmentation method, called GRAph contrastive learning for DEgree bias (GRADE), which applies different strategies to low- and high-degree nodes.
arXiv Detail & Related papers (2022-10-06T15:58:25Z) - Revisiting Graph Contrastive Learning from the Perspective of Graph Spectrum [91.06367395889514]
Graph Contrastive Learning (GCL), which learns node representations by augmenting graphs, has attracted considerable attention.
We answer these questions by establishing the connection between GCL and graph spectrum.
We propose a spectral graph contrastive learning module (SpCo), which is a general and GCL-friendly plug-in.
arXiv Detail & Related papers (2022-10-05T15:32:00Z) - Spectral Augmentation for Self-Supervised Learning on Graphs [43.19199994575821]
Graph contrastive learning (GCL) aims to learn representations via instance discrimination.
It relies on graph augmentation to reflect invariant patterns that are robust to small perturbations.
Recent studies mainly perform topology augmentations in a uniformly random manner in the spatial domain.
We develop spectral augmentation which guides topology augmentations by maximizing the spectral change.
arXiv Detail & Related papers (2022-10-02T22:20:07Z) - Let Invariant Rationale Discovery Inspire Graph Contrastive Learning [98.10268114789775]
We argue that a high-performing augmentation should preserve the salient semantics of anchor graphs regarding instance-discrimination.
We propose a new framework, Rationale-aware Graph Contrastive Learning (RGCL).
RGCL uses a rationale generator to reveal salient features about graph instance-discrimination as the rationale, and then creates rationale-aware views for contrastive learning.
arXiv Detail & Related papers (2022-06-16T01:28:40Z) - Augmentation-Free Graph Contrastive Learning [16.471928573824854]
Graph contrastive learning (GCL) is the most representative and prevalent self-supervised learning approach for graph-structured data.
Existing GCL methods rely on an augmentation scheme to learn the representations invariant across different augmentation views.
We propose a novel, theoretically principled, and augmentation-free GCL method, named AF-GCL, that leverages the features aggregated by the Graph Neural Network to construct the self-supervision signal instead of augmentations.
arXiv Detail & Related papers (2022-04-11T05:37:03Z)
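For context on the augmentation-based pipeline that several of the papers above build on (and that AF-GCL sidesteps), here is a minimal sketch of the standard two-view node-level InfoNCE objective; the temperature value and the symmetrization follow common practice rather than any single paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    # z1, z2: [N, d] embeddings of the same N nodes under two views; the i-th
    # rows form the positive pair, all other cross-view pairs act as negatives.
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau                           # [N, N] similarities
    labels = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    return F.cross_entropy(logits, labels)

# Symmetrized over both view directions, as is common:
# loss = 0.5 * (info_nce(z1, z2) + info_nce(z2, z1))
```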