Spectral Augmentation for Self-Supervised Learning on Graphs
- URL: http://arxiv.org/abs/2210.00643v2
- Date: Tue, 20 Jun 2023 18:24:52 GMT
- Title: Spectral Augmentation for Self-Supervised Learning on Graphs
- Authors: Lu Lin, Jinghui Chen, Hongning Wang
- Abstract summary: Graph contrastive learning (GCL) aims to learn representations via instance discrimination.
It relies on graph augmentation to reflect invariant patterns that are robust to small perturbations.
Recent studies mainly perform topology augmentations in a uniformly random manner in the spatial domain.
We develop spectral augmentation, which guides topology augmentations by maximizing the spectral change.
- Score: 43.19199994575821
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph contrastive learning (GCL), as an emerging self-supervised learning
technique on graphs, aims to learn representations via instance discrimination.
Its performance heavily relies on graph augmentation to reflect invariant
patterns that are robust to small perturbations; yet it remains unclear what
graph invariance GCL should capture. Recent studies mainly perform topology
augmentations in a uniformly random manner in the spatial domain, ignoring
their influence on the intrinsic structural properties embedded in the
spectral domain. In this work, we aim to find a principled way for topology
augmentations by exploring the invariance of graphs from the spectral
perspective. We develop spectral augmentation which guides topology
augmentations by maximizing the spectral change. Extensive experiments on both
graph and node classification tasks demonstrate the effectiveness of our method
in self-supervised representation learning. The proposed method also brings
promising generalization capability in transfer learning, and is equipped with
an intriguing robustness property under adversarial attacks. Our study sheds light
on a general principle for graph topology augmentation.
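The core idea of guiding topology augmentation by spectral change can be illustrated with a minimal sketch. This is not the authors' implementation; it is a hedged toy version that scores candidate edge flips by the L2 distance between the normalized-Laplacian spectra before and after the flip, and greedily picks the flip that maximizes that change (the paper's actual objective and optimization are more involved):

```python
# Toy sketch of spectral-change-guided topology augmentation.
# Assumption: "spectral change" is measured as the L2 distance between
# sorted eigenvalues of the symmetric normalized Laplacian; the real
# method's objective may differ.
import numpy as np

def laplacian_spectrum(adj):
    """Sorted eigenvalues of L = I - D^{-1/2} A D^{-1/2}."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(deg), 0.0)
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    return np.sort(np.linalg.eigvalsh(lap))

def spectral_change(adj, u, v):
    """Spectral distance induced by flipping (adding/removing) edge (u, v)."""
    perturbed = adj.copy()
    perturbed[u, v] = perturbed[v, u] = 1.0 - perturbed[u, v]
    return np.linalg.norm(laplacian_spectrum(adj) - laplacian_spectrum(perturbed))

def best_edge_flip(adj):
    """Greedily choose the single edge flip maximizing spectral change."""
    n = len(adj)
    candidates = [(u, v) for u in range(n) for v in range(u + 1, n)]
    return max(candidates, key=lambda e: spectral_change(adj, *e))

# Example: a 4-node path graph 0-1-2-3.
adj = np.zeros((4, 4))
for u, v in [(0, 1), (1, 2), (2, 3)]:
    adj[u, v] = adj[v, u] = 1.0
u, v = best_edge_flip(adj)
```

In practice one would relax this exhaustive search (which is quadratic in the number of node pairs) into a differentiable or sampled perturbation scheme, but the sketch captures the principle: the augmentation is chosen by its effect in the spectral domain rather than uniformly at random in the spatial domain.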
Related papers
- Disentangled Generative Graph Representation Learning [51.59824683232925]
This paper introduces DiGGR (Disentangled Generative Graph Representation Learning), a self-supervised learning framework.
It aims to learn latent disentangled factors and utilize them to guide graph mask modeling.
Experiments on 11 public datasets for two different graph learning tasks demonstrate that DiGGR consistently outperforms many previous self-supervised methods.
arXiv Detail & Related papers (2024-08-24T05:13:02Z)
- Heterogeneous Graph Contrastive Learning with Spectral Augmentation [15.231689595121553]
This paper introduces a spectral-enhanced graph contrastive learning model (SHCL) for the first time in heterogeneous graph neural networks.
The proposed model learns an adaptive topology augmentation scheme through the heterogeneous graph itself.
Experimental results on multiple real-world datasets demonstrate substantial advantages of the proposed model.
arXiv Detail & Related papers (2024-06-30T14:20:12Z)
- Through the Dual-Prism: A Spectral Perspective on Graph Data Augmentation for Graph Classification [71.36575018271405]
We introduce the Dual-Prism (DP) augmentation method, comprising DP-Noise and DP-Mask.
We find that keeping the low-frequency eigenvalues unchanged can preserve the critical properties at a large scale when generating augmented graphs.
arXiv Detail & Related papers (2024-01-18T12:58:53Z)
- Spectral-Aware Augmentation for Enhanced Graph Representation Learning [10.36458924914831]
We present GASSER, a model that applies tailored perturbations to specific frequencies of graph structures in the spectral domain.
Through extensive experimentation and theoretical analysis, we demonstrate that the augmentation views generated by GASSER are adaptive, controllable, and intuitively aligned with the homophily ratios and spectrum of graph structures.
arXiv Detail & Related papers (2023-10-20T22:39:07Z)
- Advective Diffusion Transformers for Topological Generalization in Graph Learning [69.2894350228753]
We show how graph diffusion equations extrapolate and generalize in the presence of varying graph topologies.
We propose a novel graph encoder backbone, Advective Diffusion Transformer (ADiT), inspired by advective graph diffusion equations.
arXiv Detail & Related papers (2023-10-10T08:40:47Z)
- Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
arXiv Detail & Related papers (2023-02-06T16:26:29Z)
- Let Invariant Rationale Discovery Inspire Graph Contrastive Learning [98.10268114789775]
We argue that a high-performing augmentation should preserve the salient semantics of anchor graphs regarding instance-discrimination.
We propose a new framework, Rationale-aware Graph Contrastive Learning (RGCL).
RGCL uses a rationale generator to reveal salient features about graph instance-discrimination as the rationale, and then creates rationale-aware views for contrastive learning.
arXiv Detail & Related papers (2022-06-16T01:28:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.