Spectral Augmentation for Self-Supervised Learning on Graphs
- URL: http://arxiv.org/abs/2210.00643v2
- Date: Tue, 20 Jun 2023 18:24:52 GMT
- Title: Spectral Augmentation for Self-Supervised Learning on Graphs
- Authors: Lu Lin, Jinghui Chen, Hongning Wang
- Abstract summary: Graph contrastive learning (GCL) aims to learn representations via instance discrimination.
It relies on graph augmentation to reflect invariant patterns that are robust to small perturbations.
Recent studies mainly perform topology augmentations in a uniformly random manner in the spatial domain.
We develop spectral augmentation which guides topology augmentations by maximizing the spectral change.
- Score: 43.19199994575821
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph contrastive learning (GCL), as an emerging self-supervised learning
technique on graphs, aims to learn representations via instance discrimination.
Its performance heavily relies on graph augmentation to reflect invariant
patterns that are robust to small perturbations; yet it remains unclear what
graph invariance GCL should capture. Recent studies mainly perform
topology augmentations in a uniformly random manner in the spatial domain,
ignoring their influence on the intrinsic structural properties embedded in the
spectral domain. In this work, we aim to find a principled way for topology
augmentations by exploring the invariance of graphs from the spectral
perspective. We develop spectral augmentation which guides topology
augmentations by maximizing the spectral change. Extensive experiments on both
graph and node classification tasks demonstrate the effectiveness of our method
in self-supervised representation learning. The proposed method also brings
promising generalization capability in transfer learning, and exhibits an
intriguing robustness property under adversarial attacks. Our study sheds light
on a general principle for graph topology augmentation.
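The core idea — scoring candidate topology perturbations by how much they change the graph's Laplacian spectrum — can be illustrated with a minimal sketch. This is our own illustration under simplifying assumptions (dense adjacency matrix, exhaustive greedy edge flips, L2 distance between eigenvalue vectors), not the paper's exact algorithm, which optimizes augmentations more efficiently.

```python
# Minimal sketch: guide topology augmentation by spectral change.
# All function names and the greedy scoring are illustrative assumptions.
import numpy as np

def laplacian_spectrum(adj):
    """Eigenvalues of the symmetric normalized Laplacian
    L = I - D^{-1/2} A D^{-1/2}."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    return np.linalg.eigvalsh(lap)

def spectral_change(adj, i, j):
    """L2 distance between spectra before and after flipping edge (i, j)."""
    flipped = adj.copy()
    flipped[i, j] = flipped[j, i] = 1.0 - flipped[i, j]
    return np.linalg.norm(laplacian_spectrum(adj) - laplacian_spectrum(flipped))

def augment(adj, budget=1):
    """Greedily flip `budget` edges, each time choosing the flip
    that maximizes the spectral change."""
    out = adj.copy()
    for _ in range(budget):
        n = len(out)
        i, j = max(((a, b) for a in range(n) for b in range(a + 1, n)),
                   key=lambda e: spectral_change(out, *e))
        out[i, j] = out[j, i] = 1.0 - out[i, j]
    return out

# Example: augment a 4-node path graph with a budget of one edge flip.
A = np.zeros((4, 4))
for u, v in [(0, 1), (1, 2), (2, 3)]:
    A[u, v] = A[v, u] = 1.0
A_aug = augment(A, budget=1)
```

The exhaustive scan over all node pairs is O(n^2) eigendecompositions and only feasible for toy graphs; the point is the objective (maximize spectral change under a perturbation budget), not this brute-force search.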
Related papers
- AS-GCL: Asymmetric Spectral Augmentation on Graph Contrastive Learning [25.07818336162072]
Graph Contrastive Learning (GCL) has emerged as the foremost approach for self-supervised learning on graph-structured data.
We propose a novel paradigm called AS-GCL that incorporates asymmetric spectral augmentation for graph contrastive learning.
Our method introduces significant enhancements to each of these components.
arXiv Detail & Related papers (2025-02-19T08:22:57Z) - Disentangled Generative Graph Representation Learning [51.59824683232925]
This paper introduces DiGGR (Disentangled Generative Graph Representation Learning), a self-supervised learning framework.
It aims to learn latent disentangled factors and utilize them to guide graph mask modeling.
Experiments on 11 public datasets for two different graph learning tasks demonstrate that DiGGR consistently outperforms many previous self-supervised methods.
arXiv Detail & Related papers (2024-08-24T05:13:02Z) - Rethinking Spectral Augmentation for Contrast-based Graph Self-Supervised Learning [10.803503272887173]
Methods grounded in seemingly conflicting assumptions regarding the spectral domain demonstrate notable enhancements in learning performance.
This suggests that the computational overhead of sophisticated spectral augmentations may not justify their practical benefits.
The proposed insights represent a significant leap forward in the field, potentially refining the understanding and implementation of graph self-supervised learning.
arXiv Detail & Related papers (2024-05-30T01:30:34Z) - Through the Dual-Prism: A Spectral Perspective on Graph Data Augmentation for Graph Classification [67.35058947477631]
We introduce Dual-Prism (DP) augmentation methods, including DP-Noise and DP-Mask, which retain essential graph properties while diversifying augmented graphs.
Extensive experiments validate the efficiency of our approach, providing a new and promising direction for graph data augmentation.
arXiv Detail & Related papers (2024-01-18T12:58:53Z) - Spectral-Aware Augmentation for Enhanced Graph Representation Learning [10.36458924914831]
We present GASSER, a model that applies tailored perturbations to specific frequencies of graph structures in the spectral domain.
Through extensive experimentation and theoretical analysis, we demonstrate that the augmentation views generated by GASSER are adaptive, controllable, and intuitively aligned with the homophily ratios and spectrum of graph structures.
arXiv Detail & Related papers (2023-10-20T22:39:07Z) - Advective Diffusion Transformers for Topological Generalization in Graph Learning [69.2894350228753]
We show how graph diffusion equations extrapolate and generalize in the presence of varying graph topologies.
We propose a novel graph encoder backbone, Advective Diffusion Transformer (ADiT), inspired by advective graph diffusion equations.
arXiv Detail & Related papers (2023-10-10T08:40:47Z) - Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
arXiv Detail & Related papers (2023-02-06T16:26:29Z) - Let Invariant Rationale Discovery Inspire Graph Contrastive Learning [98.10268114789775]
We argue that a high-performing augmentation should preserve the salient semantics of anchor graphs regarding instance-discrimination.
We propose a new framework, Rationale-aware Graph Contrastive Learning (RGCL)
RGCL uses a rationale generator to reveal salient features about graph instance-discrimination as the rationale, and then creates rationale-aware views for contrastive learning.
arXiv Detail & Related papers (2022-06-16T01:28:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.